Test Report: Docker_Linux_crio 21808

                    
db33af8e7a29a5e500790b374373258f8b494afd:2025-12-17:42825

Failed tests (27/415)

TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable volcano --alsologtostderr -v=1: exit status 11 (275.241006ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:16:52.737663 1682692 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:16:52.737902 1682692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:16:52.737911 1682692 out.go:374] Setting ErrFile to fd 2...
	I1217 11:16:52.737915 1682692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:16:52.738128 1682692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:16:52.738442 1682692 mustload.go:66] Loading cluster: addons-767877
	I1217 11:16:52.738792 1682692 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:16:52.738810 1682692 addons.go:622] checking whether the cluster is paused
	I1217 11:16:52.738894 1682692 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:16:52.738908 1682692 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:16:52.739313 1682692 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:16:52.759382 1682692 ssh_runner.go:195] Run: systemctl --version
	I1217 11:16:52.759452 1682692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:16:52.779941 1682692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:16:52.873975 1682692 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:16:52.874059 1682692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:16:52.906967 1682692 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:16:52.907002 1682692 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:16:52.907009 1682692 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:16:52.907013 1682692 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:16:52.907017 1682692 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:16:52.907024 1682692 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:16:52.907028 1682692 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:16:52.907032 1682692 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:16:52.907036 1682692 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:16:52.907051 1682692 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:16:52.907057 1682692 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:16:52.907062 1682692 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:16:52.907067 1682692 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:16:52.907073 1682692 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:16:52.907078 1682692 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:16:52.907101 1682692 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:16:52.907113 1682692 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:16:52.907121 1682692 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:16:52.907124 1682692 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:16:52.907128 1682692 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:16:52.907132 1682692 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:16:52.907137 1682692 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:16:52.907142 1682692 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:16:52.907149 1682692 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:16:52.907154 1682692 cri.go:89] found id: ""
	I1217 11:16:52.907221 1682692 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:16:52.923212 1682692 out.go:203] 
	W1217 11:16:52.925351 1682692 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:16:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:16:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:16:52.925392 1682692 out.go:285] * 
	* 
	W1217 11:16:52.931802 1682692 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:16:52.933575 1682692 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.28s)

                                                
                                    
TestAddons/parallel/Registry (14.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.872193ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003122157s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003875423s
addons_test.go:394: (dbg) Run:  kubectl --context addons-767877 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-767877 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-767877 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.190357032s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 ip
2025/12/17 11:17:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable registry --alsologtostderr -v=1: exit status 11 (264.913575ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:16.277793 1684649 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:16.278049 1684649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:16.278060 1684649 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:16.278067 1684649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:16.278269 1684649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:16.278601 1684649 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:16.278972 1684649 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:16.278996 1684649 addons.go:622] checking whether the cluster is paused
	I1217 11:17:16.279118 1684649 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:16.279138 1684649 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:16.279669 1684649 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:16.298919 1684649 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:16.298993 1684649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:16.317628 1684649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:16.415576 1684649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:16.415750 1684649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:16.450253 1684649 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:16.450289 1684649 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:16.450293 1684649 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:16.450296 1684649 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:16.450299 1684649 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:16.450303 1684649 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:16.450318 1684649 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:16.450320 1684649 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:16.450323 1684649 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:16.450335 1684649 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:16.450338 1684649 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:16.450340 1684649 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:16.450343 1684649 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:16.450345 1684649 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:16.450348 1684649 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:16.450355 1684649 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:16.450357 1684649 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:16.450362 1684649 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:16.450364 1684649 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:16.450367 1684649 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:16.450372 1684649 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:16.450376 1684649 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:16.450380 1684649 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:16.450384 1684649 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:16.450388 1684649 cri.go:89] found id: ""
	I1217 11:17:16.450447 1684649 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:16.466720 1684649 out.go:203] 
	W1217 11:17:16.468288 1684649 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:16.468310 1684649 out.go:285] * 
	* 
	W1217 11:17:16.474764 1684649 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:16.476576 1684649 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.70s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.51s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.906199ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-767877
addons_test.go:334: (dbg) Run:  kubectl --context addons-767877 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (311.859226ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:08.320335 1683343 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:08.320806 1683343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:08.320889 1683343 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:08.320907 1683343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:08.321359 1683343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:08.321901 1683343 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:08.322774 1683343 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:08.322807 1683343 addons.go:622] checking whether the cluster is paused
	I1217 11:17:08.322957 1683343 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:08.322985 1683343 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:08.323612 1683343 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:08.346715 1683343 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:08.346791 1683343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:08.372217 1683343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:08.483653 1683343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:08.483764 1683343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:08.527999 1683343 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:08.528033 1683343 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:08.528038 1683343 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:08.528043 1683343 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:08.528047 1683343 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:08.528052 1683343 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:08.528057 1683343 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:08.528061 1683343 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:08.528066 1683343 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:08.528076 1683343 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:08.528087 1683343 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:08.528092 1683343 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:08.528096 1683343 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:08.528101 1683343 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:08.528106 1683343 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:08.528117 1683343 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:08.528121 1683343 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:08.528127 1683343 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:08.528132 1683343 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:08.528136 1683343 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:08.528143 1683343 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:08.528150 1683343 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:08.528154 1683343 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:08.528158 1683343 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:08.528162 1683343 cri.go:89] found id: ""
	I1217 11:17:08.528214 1683343 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:08.548844 1683343 out.go:203] 
	W1217 11:17:08.550631 1683343 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:08.550660 1683343 out.go:285] * 
	* 
	W1217 11:17:08.559205 1683343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:08.560873 1683343 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)
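
The three addon-disable failures above (Volcano, Registry, RegistryCreds) share the same stderr signature: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl (which succeeds), then runs "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory" on this CRI-O node, so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing that check by hand, assuming the addons-767877 cluster from these logs is still running; the two commands are taken verbatim from the stderr above, and wrapping them in "minikube ssh --" is an assumption for manual use (the test itself runs them over minikube's internal SSH runner):

    # Step 1 of the paused check: list kube-system containers via CRI (this step succeeds in the logs above)
    out/minikube-linux-amd64 -p addons-767877 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Step 2: the runc listing that fails because /run/runc does not exist on this node
    out/minikube-linux-amd64 -p addons-767877 ssh -- sudo runc list -f json
    # expected result, matching the report: exit status 1, stderr "open /run/runc: no such file or directory"
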

                                                
                                    
TestAddons/parallel/Ingress (148.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-767877 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-767877 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-767877 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [ce1baf7b-4386-4a23-841e-04fd0209f7f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [ce1baf7b-4386-4a23-841e-04fd0209f7f9] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003336692s
I1217 11:17:18.558975 1672941 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.874753897s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-767877 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-767877
helpers_test.go:244: (dbg) docker inspect addons-767877:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336",
	        "Created": "2025-12-17T11:15:26.334854184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1675422,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:15:26.369931962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/hosts",
	        "LogPath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336-json.log",
	        "Name": "/addons-767877",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-767877:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-767877",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336",
	                "LowerDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-767877",
	                "Source": "/var/lib/docker/volumes/addons-767877/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-767877",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-767877",
	                "name.minikube.sigs.k8s.io": "addons-767877",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "20345ac2be468ba91a09fd7e152a97351d3afcc356bd5df2c07f464fbab12a31",
	            "SandboxKey": "/var/run/docker/netns/20345ac2be46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34301"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34302"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34305"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34303"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34304"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-767877": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6830fa5438c4af4872bd8e8877338ec3c8fbdb0a5061b2fd55580305f7682b2f",
	                    "EndpointID": "2b336b2a24e44d552e0fb9f92d832810b84b5357492c240ca5f38e2da5188569",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "52:52:20:5a:0f:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-767877",
	                        "1ab21d83dadc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-767877 -n addons-767877
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-767877 logs -n 25: (1.292681284s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-011622 --alsologtostderr --binary-mirror http://127.0.0.1:38139 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-011622 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ delete  │ -p binary-mirror-011622                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-011622 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ addons  │ enable dashboard -p addons-767877                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-767877                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ start   │ -p addons-767877 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:16 UTC │
	│ addons  │ addons-767877 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:16 UTC │                     │
	│ addons  │ addons-767877 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-767877                                                                                                                                                                                                                                                                                                                                                                                           │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-767877 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ip      │ addons-767877 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-767877 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ssh     │ addons-767877 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ssh     │ addons-767877 ssh cat /opt/local-path-provisioner/pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-767877 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ enable headlamp -p addons-767877 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ip      │ addons-767877 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-767877        │ jenkins │ v1.37.0 │ 17 Dec 25 11:19 UTC │ 17 Dec 25 11:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:15:06
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:15:06.114465 1674764 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:15:06.114607 1674764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:06.114613 1674764 out.go:374] Setting ErrFile to fd 2...
	I1217 11:15:06.114617 1674764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:06.114809 1674764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:15:06.115426 1674764 out.go:368] Setting JSON to false
	I1217 11:15:06.116305 1674764 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17851,"bootTime":1765952255,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:15:06.116381 1674764 start.go:143] virtualization: kvm guest
	I1217 11:15:06.118487 1674764 out.go:179] * [addons-767877] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:15:06.120344 1674764 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:15:06.120353 1674764 notify.go:221] Checking for updates...
	I1217 11:15:06.123159 1674764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:15:06.124504 1674764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:15:06.125696 1674764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:15:06.126916 1674764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:15:06.128269 1674764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:15:06.129712 1674764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:15:06.154073 1674764 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:15:06.154233 1674764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:15:06.213973 1674764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 11:15:06.204282649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:15:06.214108 1674764 docker.go:319] overlay module found
	I1217 11:15:06.216097 1674764 out.go:179] * Using the docker driver based on user configuration
	I1217 11:15:06.217691 1674764 start.go:309] selected driver: docker
	I1217 11:15:06.217711 1674764 start.go:927] validating driver "docker" against <nil>
	I1217 11:15:06.217729 1674764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:15:06.218364 1674764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:15:06.275997 1674764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 11:15:06.265923757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:15:06.276152 1674764 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:15:06.276385 1674764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:15:06.278454 1674764 out.go:179] * Using Docker driver with root privileges
	I1217 11:15:06.279917 1674764 cni.go:84] Creating CNI manager for ""
	I1217 11:15:06.279982 1674764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:15:06.279996 1674764 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:15:06.280067 1674764 start.go:353] cluster config:
	{Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 11:15:06.281474 1674764 out.go:179] * Starting "addons-767877" primary control-plane node in "addons-767877" cluster
	I1217 11:15:06.282736 1674764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:15:06.284083 1674764 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:15:06.285412 1674764 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:06.285450 1674764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:15:06.285462 1674764 cache.go:65] Caching tarball of preloaded images
	I1217 11:15:06.285500 1674764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:15:06.285589 1674764 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:15:06.285603 1674764 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:15:06.285956 1674764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/config.json ...
	I1217 11:15:06.285986 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/config.json: {Name:mkeb4331b0b9b75b09c1c790cf4a0f31e90d34b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:06.303578 1674764 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 11:15:06.303710 1674764 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 11:15:06.303728 1674764 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 11:15:06.303733 1674764 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 11:15:06.303741 1674764 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 11:15:06.303748 1674764 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 11:15:19.518860 1674764 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 11:15:19.518939 1674764 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:15:19.519017 1674764 start.go:360] acquireMachinesLock for addons-767877: {Name:mka931babc38735da6b7f52b3f5f8ca18e84efc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:15:19.519147 1674764 start.go:364] duration metric: took 104.066µs to acquireMachinesLock for "addons-767877"
	I1217 11:15:19.519183 1674764 start.go:93] Provisioning new machine with config: &{Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:15:19.519282 1674764 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:15:19.521207 1674764 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 11:15:19.521489 1674764 start.go:159] libmachine.API.Create for "addons-767877" (driver="docker")
	I1217 11:15:19.521548 1674764 client.go:173] LocalClient.Create starting
	I1217 11:15:19.521682 1674764 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:15:19.627142 1674764 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:15:19.692960 1674764 cli_runner.go:164] Run: docker network inspect addons-767877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:15:19.710577 1674764 cli_runner.go:211] docker network inspect addons-767877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:15:19.710648 1674764 network_create.go:284] running [docker network inspect addons-767877] to gather additional debugging logs...
	I1217 11:15:19.710673 1674764 cli_runner.go:164] Run: docker network inspect addons-767877
	W1217 11:15:19.727981 1674764 cli_runner.go:211] docker network inspect addons-767877 returned with exit code 1
	I1217 11:15:19.728012 1674764 network_create.go:287] error running [docker network inspect addons-767877]: docker network inspect addons-767877: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-767877 not found
	I1217 11:15:19.728041 1674764 network_create.go:289] output of [docker network inspect addons-767877]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-767877 not found
	
	** /stderr **
	I1217 11:15:19.728162 1674764 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:15:19.746291 1674764 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f89a00}
	I1217 11:15:19.746345 1674764 network_create.go:124] attempt to create docker network addons-767877 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 11:15:19.746393 1674764 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-767877 addons-767877
	I1217 11:15:19.793769 1674764 network_create.go:108] docker network addons-767877 192.168.49.0/24 created
	I1217 11:15:19.793801 1674764 kic.go:121] calculated static IP "192.168.49.2" for the "addons-767877" container
	I1217 11:15:19.793861 1674764 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:15:19.811159 1674764 cli_runner.go:164] Run: docker volume create addons-767877 --label name.minikube.sigs.k8s.io=addons-767877 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:15:19.831370 1674764 oci.go:103] Successfully created a docker volume addons-767877
	I1217 11:15:19.831483 1674764 cli_runner.go:164] Run: docker run --rm --name addons-767877-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-767877 --entrypoint /usr/bin/test -v addons-767877:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:15:22.393273 1674764 cli_runner.go:217] Completed: docker run --rm --name addons-767877-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-767877 --entrypoint /usr/bin/test -v addons-767877:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (2.561726319s)
	I1217 11:15:22.393308 1674764 oci.go:107] Successfully prepared a docker volume addons-767877
	I1217 11:15:22.393364 1674764 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:22.393375 1674764 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:15:22.393433 1674764 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-767877:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:15:26.258632 1674764 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-767877:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.865141769s)
	I1217 11:15:26.258679 1674764 kic.go:203] duration metric: took 3.865292775s to extract preloaded images to volume ...
	W1217 11:15:26.258783 1674764 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:15:26.258819 1674764 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:15:26.258860 1674764 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:15:26.317894 1674764 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-767877 --name addons-767877 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-767877 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-767877 --network addons-767877 --ip 192.168.49.2 --volume addons-767877:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:15:26.603355 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Running}}
	I1217 11:15:26.623422 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:26.644008 1674764 cli_runner.go:164] Run: docker exec addons-767877 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:15:26.693434 1674764 oci.go:144] the created container "addons-767877" has a running status.
	I1217 11:15:26.693466 1674764 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa...
	I1217 11:15:26.749665 1674764 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
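
	The id_rsa key created at kic.go:225 above is what every later "new ssh client" line (sshutil.go:53) uses to reach the node as &{IP:127.0.0.1 Port:34301 Username:docker}. A minimal manual equivalent, assuming that host-port mapping is still live (the port is assigned per run), would be:
	
		# hypothetical manual login using the key generated above; port 34301 is specific to this run
		ssh -i /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa -p 34301 docker@127.0.0.1
	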
	I1217 11:15:26.789006 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:26.808629 1674764 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:15:26.808657 1674764 kic_runner.go:114] Args: [docker exec --privileged addons-767877 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:15:26.878015 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:26.899057 1674764 machine.go:94] provisionDockerMachine start ...
	I1217 11:15:26.899168 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:26.925732 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:26.926045 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:26.926064 1674764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:15:26.927122 1674764 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43040->127.0.0.1:34301: read: connection reset by peer
	I1217 11:15:30.060482 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-767877
	
	I1217 11:15:30.060517 1674764 ubuntu.go:182] provisioning hostname "addons-767877"
	I1217 11:15:30.060617 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.081158 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:30.081435 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:30.081454 1674764 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-767877 && echo "addons-767877" | sudo tee /etc/hostname
	I1217 11:15:30.223752 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-767877
	
	I1217 11:15:30.223842 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.243897 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:30.244133 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:30.244150 1674764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-767877' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-767877/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-767877' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:15:30.376671 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:15:30.376717 1674764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:15:30.376752 1674764 ubuntu.go:190] setting up certificates
	I1217 11:15:30.376771 1674764 provision.go:84] configureAuth start
	I1217 11:15:30.376847 1674764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-767877
	I1217 11:15:30.398209 1674764 provision.go:143] copyHostCerts
	I1217 11:15:30.398323 1674764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:15:30.398519 1674764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:15:30.398658 1674764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:15:30.398749 1674764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.addons-767877 san=[127.0.0.1 192.168.49.2 addons-767877 localhost minikube]
	I1217 11:15:30.499086 1674764 provision.go:177] copyRemoteCerts
	I1217 11:15:30.499166 1674764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:15:30.499218 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.519512 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:30.617133 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:15:30.638750 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 11:15:30.658674 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 11:15:30.678344 1674764 provision.go:87] duration metric: took 301.553303ms to configureAuth
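
	The configureAuth step that just finished generated .minikube/machines/server.pem with san=[127.0.0.1 192.168.49.2 addons-767877 localhost minikube] (provision.go:117 above). As a sketch, assuming openssl is available on the Jenkins host, the SANs baked into that cert could be double-checked with:
	
		# hypothetical spot-check on the host; cert path taken from the provision.go:117 line above
		openssl x509 -in /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	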
	I1217 11:15:30.678382 1674764 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:15:30.678613 1674764 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:30.678733 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.697884 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:30.698122 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:30.698138 1674764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:15:30.978507 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:15:30.978555 1674764 machine.go:97] duration metric: took 4.07945213s to provisionDockerMachine
	I1217 11:15:30.978586 1674764 client.go:176] duration metric: took 11.457026019s to LocalClient.Create
	I1217 11:15:30.978604 1674764 start.go:167] duration metric: took 11.457118325s to libmachine.API.Create "addons-767877"
	I1217 11:15:30.978611 1674764 start.go:293] postStartSetup for "addons-767877" (driver="docker")
	I1217 11:15:30.978621 1674764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:15:30.978683 1674764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:15:30.978721 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.998969 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.096174 1674764 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:15:31.100106 1674764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:15:31.100143 1674764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:15:31.100160 1674764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:15:31.100232 1674764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:15:31.100265 1674764 start.go:296] duration metric: took 121.646716ms for postStartSetup
	I1217 11:15:31.100620 1674764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-767877
	I1217 11:15:31.118981 1674764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/config.json ...
	I1217 11:15:31.119253 1674764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:15:31.119311 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:31.137454 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.227831 1674764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:15:31.232441 1674764 start.go:128] duration metric: took 11.713131095s to createHost
	I1217 11:15:31.232469 1674764 start.go:83] releasing machines lock for "addons-767877", held for 11.713307547s
	I1217 11:15:31.232559 1674764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-767877
	I1217 11:15:31.250357 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:15:31.250434 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:15:31.250468 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:15:31.250506 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	W1217 11:15:31.250616 1674764 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt: no such file or directory
	I1217 11:15:31.250715 1674764 ssh_runner.go:195] Run: cat /version.json
	I1217 11:15:31.250770 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:31.250830 1674764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:15:31.250937 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:31.268778 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.270787 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.358856 1674764 ssh_runner.go:195] Run: systemctl --version
	I1217 11:15:31.411652 1674764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:15:31.448442 1674764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:15:31.453231 1674764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:15:31.453291 1674764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:15:31.481284 1674764 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:15:31.481308 1674764 start.go:496] detecting cgroup driver to use...
	I1217 11:15:31.481347 1674764 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:15:31.481392 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:15:31.498983 1674764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:15:31.512560 1674764 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:15:31.512627 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:15:31.530285 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:15:31.548560 1674764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:15:31.633820 1674764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:15:31.724060 1674764 docker.go:234] disabling docker service ...
	I1217 11:15:31.724132 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:15:31.743940 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:15:31.757394 1674764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:15:31.844943 1674764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:15:31.929733 1674764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:15:31.943023 1674764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:15:31.958060 1674764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:15:31.958117 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.969117 1674764 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:15:31.969190 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.978672 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.987830 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.997592 1674764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:15:32.006546 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:32.016010 1674764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:32.030611 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:32.041363 1674764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:15:32.049304 1674764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:15:32.057259 1674764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:32.136706 1674764 ssh_runner.go:195] Run: sudo systemctl restart crio
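
	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "systemd", conmon_cgroup "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl) before this crio restart. A quick way to confirm what ended up in the file, assuming the profile is still running, would be:
	
		# hypothetical spot-check of the values set by the sed commands above
		minikube -p addons-767877 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	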
	I1217 11:15:32.270812 1674764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:15:32.270897 1674764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:15:32.275242 1674764 start.go:564] Will wait 60s for crictl version
	I1217 11:15:32.275314 1674764 ssh_runner.go:195] Run: which crictl
	I1217 11:15:32.279124 1674764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:15:32.304271 1674764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:15:32.304371 1674764 ssh_runner.go:195] Run: crio --version
	I1217 11:15:32.334376 1674764 ssh_runner.go:195] Run: crio --version
	I1217 11:15:32.365733 1674764 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 11:15:32.367152 1674764 cli_runner.go:164] Run: docker network inspect addons-767877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:15:32.386016 1674764 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 11:15:32.390444 1674764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:15:32.401566 1674764 kubeadm.go:884] updating cluster {Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:15:32.401741 1674764 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:32.401829 1674764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:15:32.436508 1674764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:15:32.436541 1674764 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:15:32.436602 1674764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:15:32.463779 1674764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:15:32.463802 1674764 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:15:32.463811 1674764 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 11:15:32.463916 1674764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-767877 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:15:32.463995 1674764 ssh_runner.go:195] Run: crio config
	I1217 11:15:32.511266 1674764 cni.go:84] Creating CNI manager for ""
	I1217 11:15:32.511292 1674764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:15:32.511314 1674764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:15:32.511342 1674764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-767877 NodeName:addons-767877 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:15:32.511497 1674764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-767877"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:15:32.511594 1674764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:15:32.520352 1674764 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:15:32.520416 1674764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:15:32.529143 1674764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 11:15:32.543392 1674764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:15:32.560158 1674764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
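
	The kubeadm.go:196 config dumped above corresponds to what this scp step writes to /var/tmp/minikube/kubeadm.yaml.new (2209 bytes). As a sketch, assuming the kubeadm binary under /var/lib/minikube/binaries/v1.34.3 found a few lines earlier, the same file could be exercised without touching the cluster via kubeadm's dry-run mode:
	
		# hypothetical dry-run inside the node; binary and file paths assumed from the surrounding log
		sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	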
	I1217 11:15:32.574385 1674764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:15:32.578555 1674764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:15:32.589272 1674764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:32.674414 1674764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:15:32.697384 1674764 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877 for IP: 192.168.49.2
	I1217 11:15:32.697408 1674764 certs.go:195] generating shared ca certs ...
	I1217 11:15:32.697430 1674764 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:32.697602 1674764 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:15:32.887689 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt ...
	I1217 11:15:32.887725 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt: {Name:mk4882739fd469c3954287ada1b0e38cfbfbf4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:32.887928 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key ...
	I1217 11:15:32.887942 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key: {Name:mk1b9b63fe2ddf00e259a101090a4a6e1bd0e44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:32.888018 1674764 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:15:33.017709 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt ...
	I1217 11:15:33.017749 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt: {Name:mkfefe589cf8069373711c0cc560b187f2d4aab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.017929 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key ...
	I1217 11:15:33.017942 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key: {Name:mk75206f47b10a40f6c55cbdc3cbc8af52e382ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.018023 1674764 certs.go:257] generating profile certs ...
	I1217 11:15:33.018082 1674764 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.key
	I1217 11:15:33.018097 1674764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt with IP's: []
	I1217 11:15:33.194733 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt ...
	I1217 11:15:33.194777 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: {Name:mk1c2c064493309c6e1adec623609d0753c01230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.194967 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.key ...
	I1217 11:15:33.194979 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.key: {Name:mkc6610e0c5152a4cc6f2bfe0238bd6a86fc868a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.195051 1674764 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a
	I1217 11:15:33.195070 1674764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 11:15:33.234173 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a ...
	I1217 11:15:33.234202 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a: {Name:mk87f54c754bcb7639ff086b5eb3c02f2f042164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.234400 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a ...
	I1217 11:15:33.234415 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a: {Name:mk630f3083d13c62827302b94c60c0a3076d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.234496 1674764 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt
	I1217 11:15:33.234612 1674764 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key
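
Aside: the apiserver certificate written above is signed for the service ClusterIP (10.96.0.1), loopback, and the node IP (192.168.49.2). If the SAN list ever needs to be checked by hand, a standard openssl inspection of the file just written (path taken verbatim from the log lines above; this is not part of the test flow) would be:

# Print the Subject Alternative Names baked into the freshly generated apiserver cert.
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt \
  | grep -A1 'Subject Alternative Name'
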
	I1217 11:15:33.234681 1674764 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key
	I1217 11:15:33.234704 1674764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt with IP's: []
	I1217 11:15:33.266045 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt ...
	I1217 11:15:33.266077 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt: {Name:mk655ebc418e7414edef9c1e9923b774e576ba24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.266242 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key ...
	I1217 11:15:33.266258 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key: {Name:mka920f2e6ef710d61dafb5798cdc1b38d5a7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.266430 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:15:33.266511 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:15:33.266557 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:15:33.266599 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:15:33.267217 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:15:33.286836 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:15:33.305257 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:15:33.324225 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:15:33.343170 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 11:15:33.361338 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:15:33.380164 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:15:33.398933 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:15:33.417425 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:15:33.438333 1674764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:15:33.451773 1674764 ssh_runner.go:195] Run: openssl version
	I1217 11:15:33.458333 1674764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.466626 1674764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:15:33.477631 1674764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.482006 1674764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.482065 1674764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.516993 1674764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:15:33.525550 1674764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
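
Aside: the commands above install minikubeCA into the node's OpenSSL trust store — the PEM is linked into /etc/ssl/certs under a readable name and again under its subject hash (b5213941.0) so OpenSSL can find it by hash lookup. A minimal sketch of the same procedure for an arbitrary CA file (my-ca.pem is a hypothetical placeholder):

# Compute the subject hash OpenSSL uses for trust-store lookups.
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
# Link the cert under both its readable name and its hash name.
sudo ln -fs /usr/share/ca-certificates/my-ca.pem /etc/ssl/certs/my-ca.pem
sudo ln -fs /etc/ssl/certs/my-ca.pem "/etc/ssl/certs/${HASH}.0"
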
	I1217 11:15:33.533954 1674764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:15:33.538147 1674764 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:15:33.538240 1674764 kubeadm.go:401] StartCluster: {Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:15:33.538341 1674764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:15:33.538418 1674764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:15:33.572833 1674764 cri.go:89] found id: ""
	I1217 11:15:33.572933 1674764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:15:33.581515 1674764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:15:33.590189 1674764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:15:33.590252 1674764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:15:33.598421 1674764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:15:33.598457 1674764 kubeadm.go:158] found existing configuration files:
	
	I1217 11:15:33.598512 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:15:33.606627 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:15:33.606683 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:15:33.614520 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:15:33.622568 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:15:33.622640 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:15:33.630334 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:15:33.638281 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:15:33.638333 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:15:33.646090 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:15:33.654322 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:15:33.654379 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
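
Aside: the preceding block repeats one pattern for admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf — if a file does not already reference https://control-plane.minikube.internal:8443 it is removed so the upcoming kubeadm init can regenerate it. A compact sketch of that cleanup (illustrative, not minikube's actual code):

# Remove any kubeconfig that does not point at the expected control-plane endpoint.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done
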
	I1217 11:15:33.662432 1674764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:15:33.701495 1674764 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:15:33.701567 1674764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:15:33.723934 1674764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:15:33.724026 1674764 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:15:33.724068 1674764 kubeadm.go:319] OS: Linux
	I1217 11:15:33.724123 1674764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:15:33.724168 1674764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:15:33.724214 1674764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:15:33.724256 1674764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:15:33.724296 1674764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:15:33.724384 1674764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:15:33.724482 1674764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:15:33.724555 1674764 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:15:33.784667 1674764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:15:33.784806 1674764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:15:33.785005 1674764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:15:33.793118 1674764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:15:33.795464 1674764 out.go:252]   - Generating certificates and keys ...
	I1217 11:15:33.795604 1674764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:15:33.795707 1674764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:15:33.999124 1674764 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:15:34.137796 1674764 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:15:34.412578 1674764 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:15:34.632112 1674764 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:15:34.806825 1674764 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:15:34.806969 1674764 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-767877 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 11:15:34.942934 1674764 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:15:34.943121 1674764 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-767877 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 11:15:35.200820 1674764 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:15:35.594983 1674764 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:15:36.110517 1674764 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:15:36.110654 1674764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:15:36.182240 1674764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:15:36.487965 1674764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:15:36.986595 1674764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:15:37.353997 1674764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:15:38.249958 1674764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:15:38.250267 1674764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:15:38.255198 1674764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:15:38.256768 1674764 out.go:252]   - Booting up control plane ...
	I1217 11:15:38.256903 1674764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:15:38.257031 1674764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:15:38.257719 1674764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:15:38.272014 1674764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:15:38.272169 1674764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:15:38.278632 1674764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:15:38.278751 1674764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:15:38.278830 1674764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:15:38.379862 1674764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:15:38.380037 1674764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:15:39.380847 1674764 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001145644s
	I1217 11:15:39.383816 1674764 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:15:39.383911 1674764 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 11:15:39.383995 1674764 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:15:39.384126 1674764 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:15:40.701470 1674764 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.317491754s
	I1217 11:15:41.948655 1674764 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.564798441s
	I1217 11:15:43.885735 1674764 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501781912s
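
Aside: kubeadm's control-plane-check phase above probes the kubelet and each control-plane component on the health endpoints named in the log. The same endpoints can be hit by hand; a rough sketch (-k skips certificate verification, and the secure component ports may additionally require credentials depending on RBAC settings):

# Kubelet liveness (plain HTTP on localhost).
curl -sf http://127.0.0.1:10248/healthz
# kube-controller-manager and kube-scheduler secure health endpoints.
curl -ksf https://127.0.0.1:10257/healthz
curl -ksf https://127.0.0.1:10259/livez
# kube-apiserver liveness on the advertised address.
curl -ksf https://192.168.49.2:8443/livez
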
	I1217 11:15:43.902445 1674764 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:15:43.915295 1674764 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:15:43.925306 1674764 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:15:43.925677 1674764 kubeadm.go:319] [mark-control-plane] Marking the node addons-767877 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:15:43.934699 1674764 kubeadm.go:319] [bootstrap-token] Using token: piq0we.dhk2ndq2cma16lft
	I1217 11:15:43.936706 1674764 out.go:252]   - Configuring RBAC rules ...
	I1217 11:15:43.936873 1674764 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:15:43.943316 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:15:43.949204 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:15:43.952023 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:15:43.954918 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:15:43.959243 1674764 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:15:44.292559 1674764 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:15:44.708976 1674764 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:15:45.292197 1674764 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:15:45.293348 1674764 kubeadm.go:319] 
	I1217 11:15:45.293450 1674764 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:15:45.293461 1674764 kubeadm.go:319] 
	I1217 11:15:45.293600 1674764 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:15:45.293611 1674764 kubeadm.go:319] 
	I1217 11:15:45.293648 1674764 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:15:45.293735 1674764 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:15:45.293785 1674764 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:15:45.293817 1674764 kubeadm.go:319] 
	I1217 11:15:45.293879 1674764 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:15:45.293890 1674764 kubeadm.go:319] 
	I1217 11:15:45.293963 1674764 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:15:45.293976 1674764 kubeadm.go:319] 
	I1217 11:15:45.294053 1674764 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:15:45.294127 1674764 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:15:45.294209 1674764 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:15:45.294226 1674764 kubeadm.go:319] 
	I1217 11:15:45.294296 1674764 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:15:45.294383 1674764 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:15:45.294391 1674764 kubeadm.go:319] 
	I1217 11:15:45.294471 1674764 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token piq0we.dhk2ndq2cma16lft \
	I1217 11:15:45.294639 1674764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:15:45.294669 1674764 kubeadm.go:319] 	--control-plane 
	I1217 11:15:45.294673 1674764 kubeadm.go:319] 
	I1217 11:15:45.294745 1674764 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:15:45.294757 1674764 kubeadm.go:319] 
	I1217 11:15:45.294871 1674764 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token piq0we.dhk2ndq2cma16lft \
	I1217 11:15:45.294961 1674764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:15:45.297200 1674764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:15:45.297325 1674764 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:15:45.297364 1674764 cni.go:84] Creating CNI manager for ""
	I1217 11:15:45.297379 1674764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:15:45.299607 1674764 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:15:45.301167 1674764 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:15:45.305858 1674764 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:15:45.305883 1674764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:15:45.320247 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:15:45.537713 1674764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:15:45.537786 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-767877 minikube.k8s.io/updated_at=2025_12_17T11_15_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=addons-767877 minikube.k8s.io/primary=true
	I1217 11:15:45.537786 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:45.549843 1674764 ops.go:34] apiserver oom_adj: -16
	I1217 11:15:45.629360 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:46.129837 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:46.630166 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:47.130167 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:47.629516 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:48.129857 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:48.629668 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:49.129775 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:49.630480 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:50.130070 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:50.206854 1674764 kubeadm.go:1114] duration metric: took 4.669130286s to wait for elevateKubeSystemPrivileges
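
Aside: the repeated "kubectl get sa default" calls above are a ~500ms poll — minikube waits for the default ServiceAccount to be created by kube-controller-manager before the cluster-admin binding for kube-system:default (the minikube-rbac clusterrolebinding run just above) is considered effective. A simplified stand-in for that retry loop:

# Poll until the default ServiceAccount exists (simplified sketch of the loop in the log).
until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
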
	I1217 11:15:50.206891 1674764 kubeadm.go:403] duration metric: took 16.668683367s to StartCluster
	I1217 11:15:50.206912 1674764 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:50.207031 1674764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:15:50.207568 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:50.207808 1674764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:15:50.207828 1674764 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:15:50.207910 1674764 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 11:15:50.208044 1674764 addons.go:70] Setting yakd=true in profile "addons-767877"
	I1217 11:15:50.208052 1674764 addons.go:70] Setting default-storageclass=true in profile "addons-767877"
	I1217 11:15:50.208063 1674764 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:50.208074 1674764 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-767877"
	I1217 11:15:50.208073 1674764 addons.go:70] Setting cloud-spanner=true in profile "addons-767877"
	I1217 11:15:50.208095 1674764 addons.go:70] Setting storage-provisioner=true in profile "addons-767877"
	I1217 11:15:50.208107 1674764 addons.go:239] Setting addon storage-provisioner=true in "addons-767877"
	I1217 11:15:50.208119 1674764 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-767877"
	I1217 11:15:50.208122 1674764 addons.go:70] Setting ingress-dns=true in profile "addons-767877"
	I1217 11:15:50.208130 1674764 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-767877"
	I1217 11:15:50.208135 1674764 addons.go:239] Setting addon ingress-dns=true in "addons-767877"
	I1217 11:15:50.208152 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208171 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208173 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208152 1674764 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-767877"
	I1217 11:15:50.208178 1674764 addons.go:70] Setting gcp-auth=true in profile "addons-767877"
	I1217 11:15:50.208211 1674764 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-767877"
	I1217 11:15:50.208223 1674764 mustload.go:66] Loading cluster: addons-767877
	I1217 11:15:50.208262 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208451 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208481 1674764 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:50.208612 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208664 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208690 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208762 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208865 1674764 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-767877"
	I1217 11:15:50.208924 1674764 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-767877"
	I1217 11:15:50.208954 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.209009 1674764 addons.go:70] Setting volcano=true in profile "addons-767877"
	I1217 11:15:50.209028 1674764 addons.go:239] Setting addon volcano=true in "addons-767877"
	I1217 11:15:50.209066 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208076 1674764 addons.go:70] Setting registry=true in profile "addons-767877"
	I1217 11:15:50.209203 1674764 addons.go:239] Setting addon registry=true in "addons-767877"
	I1217 11:15:50.209237 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.209397 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.209561 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.209712 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208110 1674764 addons.go:239] Setting addon cloud-spanner=true in "addons-767877"
	I1217 11:15:50.209934 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.210401 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.210933 1674764 addons.go:70] Setting inspektor-gadget=true in profile "addons-767877"
	I1217 11:15:50.210960 1674764 addons.go:239] Setting addon inspektor-gadget=true in "addons-767877"
	I1217 11:15:50.210990 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.211489 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.211623 1674764 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-767877"
	I1217 11:15:50.211645 1674764 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-767877"
	I1217 11:15:50.211972 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208068 1674764 addons.go:239] Setting addon yakd=true in "addons-767877"
	I1217 11:15:50.212579 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208086 1674764 addons.go:70] Setting registry-creds=true in profile "addons-767877"
	I1217 11:15:50.215653 1674764 addons.go:239] Setting addon registry-creds=true in "addons-767877"
	I1217 11:15:50.215713 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208766 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.213706 1674764 addons.go:70] Setting ingress=true in profile "addons-767877"
	I1217 11:15:50.215910 1674764 addons.go:239] Setting addon ingress=true in "addons-767877"
	I1217 11:15:50.215971 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.213754 1674764 out.go:179] * Verifying Kubernetes components...
	I1217 11:15:50.216129 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.213963 1674764 addons.go:70] Setting volumesnapshots=true in profile "addons-767877"
	I1217 11:15:50.216225 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.216235 1674764 addons.go:239] Setting addon volumesnapshots=true in "addons-767877"
	I1217 11:15:50.216281 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.213981 1674764 addons.go:70] Setting metrics-server=true in profile "addons-767877"
	I1217 11:15:50.216707 1674764 addons.go:239] Setting addon metrics-server=true in "addons-767877"
	I1217 11:15:50.216800 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.217911 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.221962 1674764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:50.224004 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.224179 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.259849 1674764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:15:50.261358 1674764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:15:50.261381 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:15:50.261469 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.266074 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 11:15:50.267427 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 11:15:50.268872 1674764 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 11:15:50.268942 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 11:15:50.273135 1674764 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 11:15:50.273156 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 11:15:50.273227 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.274849 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 11:15:50.274927 1674764 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 11:15:50.276463 1674764 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 11:15:50.276488 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 11:15:50.276585 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.276787 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 11:15:50.279162 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 11:15:50.280585 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 11:15:50.282808 1674764 addons.go:239] Setting addon default-storageclass=true in "addons-767877"
	I1217 11:15:50.285149 1674764 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 11:15:50.286504 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.290022 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.290394 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 11:15:50.290488 1674764 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 11:15:50.290503 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 11:15:50.290572 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.290421 1674764 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-767877"
	I1217 11:15:50.290715 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.290775 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.291242 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.297435 1674764 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 11:15:50.298875 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 11:15:50.298912 1674764 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 11:15:50.298974 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.299146 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 11:15:50.299158 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 11:15:50.299215 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.299820 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 11:15:50.305616 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 11:15:50.305644 1674764 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 11:15:50.305740 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.305986 1674764 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 11:15:50.310932 1674764 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 11:15:50.322695 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 11:15:50.320009 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.322913 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	W1217 11:15:50.322652 1674764 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 11:15:50.331076 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:15:50.331155 1674764 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 11:15:50.331078 1674764 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 11:15:50.331078 1674764 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 11:15:50.332877 1674764 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 11:15:50.332916 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 11:15:50.332980 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.333153 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 11:15:50.333176 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 11:15:50.333238 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.335508 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 11:15:50.335626 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 11:15:50.335639 1674764 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 11:15:50.335723 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.347990 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:15:50.349650 1674764 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 11:15:50.349679 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 11:15:50.349765 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.358667 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.362117 1674764 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 11:15:50.368246 1674764 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 11:15:50.372111 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 11:15:50.372972 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 11:15:50.373213 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.380228 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.380727 1674764 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:15:50.380745 1674764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:15:50.380813 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.388109 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.388642 1674764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
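
Aside: the long sed pipeline above edits the live coredns ConfigMap so in-cluster DNS resolves host.minikube.internal, and also inserts a log directive ahead of the errors plugin. What the injected stanza looks like inside the Corefile after the replace (values taken from the command above; the surrounding plugins depend on the stock Corefile):

# Illustrative only: the block the sed edit inserts before "forward . /etc/resolv.conf".
cat <<'EOF'
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
EOF
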
	I1217 11:15:50.396393 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.397611 1674764 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 11:15:50.397824 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.403005 1674764 out.go:179]   - Using image docker.io/busybox:stable
	I1217 11:15:50.404351 1674764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 11:15:50.404369 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 11:15:50.404449 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.404834 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.411981 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.416389 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.424241 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.428764 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.435293 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.441190 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.452635 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.452842 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	W1217 11:15:50.455375 1674764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 11:15:50.455421 1674764 retry.go:31] will retry after 343.863702ms: ssh: handshake failed: EOF
	I1217 11:15:50.456700 1674764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:15:50.552037 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 11:15:50.575355 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 11:15:50.576136 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 11:15:50.576164 1674764 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 11:15:50.587993 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:15:50.592977 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 11:15:50.593001 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 11:15:50.606157 1674764 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 11:15:50.606183 1674764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 11:15:50.610294 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 11:15:50.615186 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 11:15:50.615217 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 11:15:50.620059 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:15:50.623101 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 11:15:50.626626 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 11:15:50.629041 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 11:15:50.629067 1674764 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 11:15:50.637696 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 11:15:50.647286 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 11:15:50.647312 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 11:15:50.648294 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 11:15:50.653119 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 11:15:50.653265 1674764 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 11:15:50.659832 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 11:15:50.659858 1674764 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 11:15:50.660150 1674764 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 11:15:50.660167 1674764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 11:15:50.682049 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 11:15:50.682154 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 11:15:50.685794 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 11:15:50.685815 1674764 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 11:15:50.707254 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 11:15:50.707341 1674764 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 11:15:50.717493 1674764 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 11:15:50.717614 1674764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 11:15:50.718641 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 11:15:50.718716 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 11:15:50.751315 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 11:15:50.751345 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 11:15:50.757326 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 11:15:50.769285 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 11:15:50.769305 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 11:15:50.774231 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 11:15:50.774340 1674764 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 11:15:50.788331 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 11:15:50.808260 1674764 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:15:50.808289 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 11:15:50.811313 1674764 node_ready.go:35] waiting up to 6m0s for node "addons-767877" to be "Ready" ...
	I1217 11:15:50.811803 1674764 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 11:15:50.813435 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 11:15:50.813454 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 11:15:50.845704 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 11:15:50.888957 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 11:15:50.889049 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 11:15:50.890981 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:15:50.975672 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 11:15:50.975706 1674764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 11:15:51.041756 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 11:15:51.041787 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 11:15:51.047141 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 11:15:51.098080 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 11:15:51.098105 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 11:15:51.148594 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 11:15:51.148631 1674764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 11:15:51.211938 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 11:15:51.317464 1674764 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-767877" context rescaled to 1 replicas
	I1217 11:15:51.641019 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.003280766s)
	I1217 11:15:51.641379 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.014722735s)
	I1217 11:15:51.958876 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.310544014s)
	I1217 11:15:51.958931 1674764 addons.go:495] Verifying addon ingress=true in "addons-767877"
	I1217 11:15:51.959282 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201845876s)
	I1217 11:15:51.959329 1674764 addons.go:495] Verifying addon metrics-server=true in "addons-767877"
	I1217 11:15:51.959385 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.170929584s)
	I1217 11:15:51.959407 1674764 addons.go:495] Verifying addon registry=true in "addons-767877"
	I1217 11:15:51.959546 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.113797063s)
	I1217 11:15:51.960239 1674764 out.go:179] * Verifying ingress addon...
	I1217 11:15:51.961286 1674764 out.go:179] * Verifying registry addon...
	I1217 11:15:51.963104 1674764 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 11:15:51.963351 1674764 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-767877 service yakd-dashboard -n yakd-dashboard
	
	I1217 11:15:51.964938 1674764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 11:15:51.975794 1674764 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 11:15:51.975822 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:51.977766 1674764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 11:15:51.977791 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:52.468625 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:52.469200 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:52.475371 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.584343307s)
	W1217 11:15:52.475440 1674764 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 11:15:52.475467 1674764 retry.go:31] will retry after 288.762059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 11:15:52.475578 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.428408093s)
	I1217 11:15:52.476015 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.264032382s)
	I1217 11:15:52.476046 1674764 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-767877"
	I1217 11:15:52.477967 1674764 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 11:15:52.483953 1674764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 11:15:52.497956 1674764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 11:15:52.497995 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:52.765300 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1217 11:15:52.815039 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:52.967414 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:52.967606 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:52.987347 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:53.467542 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:53.467705 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:53.569984 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:53.967432 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:53.968365 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:53.987396 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:54.466958 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:54.468290 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:54.487521 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 11:15:54.815256 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:54.966668 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:54.967599 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:54.987313 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:55.271800 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.506451965s)
	I1217 11:15:55.467390 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:55.467818 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:55.487304 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:55.967717 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:55.967771 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:55.987458 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:56.467137 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:56.468625 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:56.487687 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:56.967632 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:56.967737 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:56.987605 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 11:15:57.314674 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:57.467221 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:57.468371 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:57.487448 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:57.899285 1674764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 11:15:57.899360 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:57.918094 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:57.967231 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:57.968366 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:57.987750 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:58.027257 1674764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 11:15:58.040827 1674764 addons.go:239] Setting addon gcp-auth=true in "addons-767877"
	I1217 11:15:58.040881 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:58.041219 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:58.060014 1674764 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 11:15:58.060081 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:58.079944 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:58.171684 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:15:58.172875 1674764 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 11:15:58.174033 1674764 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 11:15:58.174051 1674764 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 11:15:58.188075 1674764 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 11:15:58.188101 1674764 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 11:15:58.201260 1674764 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 11:15:58.201285 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 11:15:58.214403 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 11:15:58.466815 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:58.467082 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:58.505337 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:58.531836 1674764 addons.go:495] Verifying addon gcp-auth=true in "addons-767877"
	I1217 11:15:58.533330 1674764 out.go:179] * Verifying gcp-auth addon...
	I1217 11:15:58.538008 1674764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 11:15:58.567236 1674764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 11:15:58.567259 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:15:58.966418 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:58.967776 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:58.987306 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:59.040989 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 11:15:59.315028 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:59.467649 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:59.467692 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:59.487705 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:59.568855 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:15:59.966276 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:59.967773 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:59.987438 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:00.041309 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:00.466750 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:00.468415 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:00.487793 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:00.541445 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:00.966895 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:00.968501 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:00.987891 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:01.042107 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:01.466731 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:01.468175 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:01.487430 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:01.542086 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 11:16:01.815147 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:16:01.967296 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:01.967897 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:01.987789 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:02.041684 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:02.467027 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:02.468665 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:02.487860 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:02.542222 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:02.966695 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:02.967664 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:02.987679 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:03.041407 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:03.467061 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:03.468658 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:03.487508 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:03.541506 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:03.966467 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:03.968724 1674764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 11:16:03.968747 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:03.987707 1674764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 11:16:03.987731 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:04.040819 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:04.315950 1674764 node_ready.go:49] node "addons-767877" is "Ready"
	I1217 11:16:04.315985 1674764 node_ready.go:38] duration metric: took 13.504627777s for node "addons-767877" to be "Ready" ...
	I1217 11:16:04.316071 1674764 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:16:04.316185 1674764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:16:04.335331 1674764 api_server.go:72] duration metric: took 14.127421304s to wait for apiserver process to appear ...
	I1217 11:16:04.335368 1674764 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:16:04.335393 1674764 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 11:16:04.342759 1674764 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 11:16:04.344001 1674764 api_server.go:141] control plane version: v1.34.3
	I1217 11:16:04.344054 1674764 api_server.go:131] duration metric: took 8.678237ms to wait for apiserver health ...
	I1217 11:16:04.344066 1674764 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:16:04.348966 1674764 system_pods.go:59] 20 kube-system pods found
	I1217 11:16:04.349011 1674764 system_pods.go:61] "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:04.349025 1674764 system_pods.go:61] "coredns-66bc5c9577-bk7js" [93210791-8ce9-43e9-9da6-e86d9de52b6f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:04.349038 1674764 system_pods.go:61] "csi-hostpath-attacher-0" [eec1eed5-47ac-49ed-a8be-dee549fb94bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:04.349045 1674764 system_pods.go:61] "csi-hostpath-resizer-0" [5eaaa85c-9ad5-41d8-ac6c-8c4fa13a517c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:04.349054 1674764 system_pods.go:61] "csi-hostpathplugin-swlsr" [c3ad9360-2599-4b66-a906-94b66525daf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:04.349060 1674764 system_pods.go:61] "etcd-addons-767877" [e87b5478-e253-4eef-bdbe-d72caaad1864] Running
	I1217 11:16:04.349065 1674764 system_pods.go:61] "kindnet-nkfjh" [d5de55b0-a578-4d51-b058-d52c5a57ab72] Running
	I1217 11:16:04.349070 1674764 system_pods.go:61] "kube-apiserver-addons-767877" [1f0007be-20ae-4b96-a9ef-6f086ff6e9eb] Running
	I1217 11:16:04.349075 1674764 system_pods.go:61] "kube-controller-manager-addons-767877" [fb8ee463-4ef0-4a87-86cd-fd57584b3815] Running
	I1217 11:16:04.349083 1674764 system_pods.go:61] "kube-ingress-dns-minikube" [d0f34d27-69e5-47e3-b44b-96bbc77f4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:04.349088 1674764 system_pods.go:61] "kube-proxy-dmglt" [93e628dd-43c4-40f5-9d00-5eaeb986dcbd] Running
	I1217 11:16:04.349095 1674764 system_pods.go:61] "kube-scheduler-addons-767877" [0933f37d-9c50-40e4-9e8a-3adba17f3f11] Running
	I1217 11:16:04.349102 1674764 system_pods.go:61] "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:04.349111 1674764 system_pods.go:61] "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:04.349128 1674764 system_pods.go:61] "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:04.349146 1674764 system_pods.go:61] "registry-creds-764b6fb674-crd5v" [ae965757-a78e-4e4e-b450-21f182854184] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:04.349154 1674764 system_pods.go:61] "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:04.349165 1674764 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2jdlm" [1319b2f8-69a5-401c-a890-f3b2110a9af0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.349183 1674764 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dm88z" [078ab857-613a-47a2-9d88-499a5c525f59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.349194 1674764 system_pods.go:61] "storage-provisioner" [d4bd042c-c801-4c4a-98a4-b825d20aad52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:16:04.349206 1674764 system_pods.go:74] duration metric: took 5.131864ms to wait for pod list to return data ...
	I1217 11:16:04.349220 1674764 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:16:04.351654 1674764 default_sa.go:45] found service account: "default"
	I1217 11:16:04.351682 1674764 default_sa.go:55] duration metric: took 2.454472ms for default service account to be created ...
	I1217 11:16:04.351694 1674764 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:16:04.449634 1674764 system_pods.go:86] 20 kube-system pods found
	I1217 11:16:04.449671 1674764 system_pods.go:89] "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:04.449688 1674764 system_pods.go:89] "coredns-66bc5c9577-bk7js" [93210791-8ce9-43e9-9da6-e86d9de52b6f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:04.449696 1674764 system_pods.go:89] "csi-hostpath-attacher-0" [eec1eed5-47ac-49ed-a8be-dee549fb94bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:04.449701 1674764 system_pods.go:89] "csi-hostpath-resizer-0" [5eaaa85c-9ad5-41d8-ac6c-8c4fa13a517c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:04.449707 1674764 system_pods.go:89] "csi-hostpathplugin-swlsr" [c3ad9360-2599-4b66-a906-94b66525daf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:04.449711 1674764 system_pods.go:89] "etcd-addons-767877" [e87b5478-e253-4eef-bdbe-d72caaad1864] Running
	I1217 11:16:04.449715 1674764 system_pods.go:89] "kindnet-nkfjh" [d5de55b0-a578-4d51-b058-d52c5a57ab72] Running
	I1217 11:16:04.449719 1674764 system_pods.go:89] "kube-apiserver-addons-767877" [1f0007be-20ae-4b96-a9ef-6f086ff6e9eb] Running
	I1217 11:16:04.449723 1674764 system_pods.go:89] "kube-controller-manager-addons-767877" [fb8ee463-4ef0-4a87-86cd-fd57584b3815] Running
	I1217 11:16:04.449729 1674764 system_pods.go:89] "kube-ingress-dns-minikube" [d0f34d27-69e5-47e3-b44b-96bbc77f4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:04.449733 1674764 system_pods.go:89] "kube-proxy-dmglt" [93e628dd-43c4-40f5-9d00-5eaeb986dcbd] Running
	I1217 11:16:04.449737 1674764 system_pods.go:89] "kube-scheduler-addons-767877" [0933f37d-9c50-40e4-9e8a-3adba17f3f11] Running
	I1217 11:16:04.449751 1674764 system_pods.go:89] "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:04.449759 1674764 system_pods.go:89] "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:04.449770 1674764 system_pods.go:89] "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:04.449777 1674764 system_pods.go:89] "registry-creds-764b6fb674-crd5v" [ae965757-a78e-4e4e-b450-21f182854184] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:04.449782 1674764 system_pods.go:89] "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:04.449790 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2jdlm" [1319b2f8-69a5-401c-a890-f3b2110a9af0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.449795 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dm88z" [078ab857-613a-47a2-9d88-499a5c525f59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.449803 1674764 system_pods.go:89] "storage-provisioner" [d4bd042c-c801-4c4a-98a4-b825d20aad52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:16:04.449821 1674764 retry.go:31] will retry after 260.899758ms: missing components: kube-dns
	I1217 11:16:04.467279 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:04.467555 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:04.487913 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:04.541545 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:04.716272 1674764 system_pods.go:86] 20 kube-system pods found
	I1217 11:16:04.716308 1674764 system_pods.go:89] "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:04.716315 1674764 system_pods.go:89] "coredns-66bc5c9577-bk7js" [93210791-8ce9-43e9-9da6-e86d9de52b6f] Running
	I1217 11:16:04.716322 1674764 system_pods.go:89] "csi-hostpath-attacher-0" [eec1eed5-47ac-49ed-a8be-dee549fb94bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:04.716330 1674764 system_pods.go:89] "csi-hostpath-resizer-0" [5eaaa85c-9ad5-41d8-ac6c-8c4fa13a517c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:04.716339 1674764 system_pods.go:89] "csi-hostpathplugin-swlsr" [c3ad9360-2599-4b66-a906-94b66525daf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:04.716344 1674764 system_pods.go:89] "etcd-addons-767877" [e87b5478-e253-4eef-bdbe-d72caaad1864] Running
	I1217 11:16:04.716349 1674764 system_pods.go:89] "kindnet-nkfjh" [d5de55b0-a578-4d51-b058-d52c5a57ab72] Running
	I1217 11:16:04.716353 1674764 system_pods.go:89] "kube-apiserver-addons-767877" [1f0007be-20ae-4b96-a9ef-6f086ff6e9eb] Running
	I1217 11:16:04.716357 1674764 system_pods.go:89] "kube-controller-manager-addons-767877" [fb8ee463-4ef0-4a87-86cd-fd57584b3815] Running
	I1217 11:16:04.716363 1674764 system_pods.go:89] "kube-ingress-dns-minikube" [d0f34d27-69e5-47e3-b44b-96bbc77f4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:04.716369 1674764 system_pods.go:89] "kube-proxy-dmglt" [93e628dd-43c4-40f5-9d00-5eaeb986dcbd] Running
	I1217 11:16:04.716373 1674764 system_pods.go:89] "kube-scheduler-addons-767877" [0933f37d-9c50-40e4-9e8a-3adba17f3f11] Running
	I1217 11:16:04.716379 1674764 system_pods.go:89] "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:04.716387 1674764 system_pods.go:89] "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:04.716392 1674764 system_pods.go:89] "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:04.716400 1674764 system_pods.go:89] "registry-creds-764b6fb674-crd5v" [ae965757-a78e-4e4e-b450-21f182854184] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:04.716405 1674764 system_pods.go:89] "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:04.716417 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2jdlm" [1319b2f8-69a5-401c-a890-f3b2110a9af0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.716425 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dm88z" [078ab857-613a-47a2-9d88-499a5c525f59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.716431 1674764 system_pods.go:89] "storage-provisioner" [d4bd042c-c801-4c4a-98a4-b825d20aad52] Running
	I1217 11:16:04.716441 1674764 system_pods.go:126] duration metric: took 364.740674ms to wait for k8s-apps to be running ...
	I1217 11:16:04.716450 1674764 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:16:04.716495 1674764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:16:04.730332 1674764 system_svc.go:56] duration metric: took 13.86904ms WaitForService to wait for kubelet
	I1217 11:16:04.730366 1674764 kubeadm.go:587] duration metric: took 14.522509009s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:16:04.730394 1674764 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:16:04.733705 1674764 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:16:04.733735 1674764 node_conditions.go:123] node cpu capacity is 8
	I1217 11:16:04.733751 1674764 node_conditions.go:105] duration metric: took 3.351782ms to run NodePressure ...
	I1217 11:16:04.733762 1674764 start.go:242] waiting for startup goroutines ...
	I1217 11:16:04.967006 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:04.967498 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:04.987672 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:05.041383 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:05.467445 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:05.468622 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:05.487890 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:05.541430 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:05.967963 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:05.968303 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:05.987986 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:06.041841 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:06.467017 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:06.468623 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:06.487816 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:06.541481 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:06.967362 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:06.969038 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:06.988916 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:07.042029 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:07.467039 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:07.468605 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:07.488384 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:07.541841 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:07.968287 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:07.968594 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:07.988080 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:08.042384 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:08.467868 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:08.468359 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:08.488010 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:08.542510 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:08.967421 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:08.968260 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:08.987343 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:09.041194 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:09.467502 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:09.468603 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.567955 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:09.568217 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:09.966752 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:09.967640 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.987776 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:10.041764 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:10.467839 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.467893 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:10.487927 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:10.541846 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:10.969949 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.970018 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:10.988072 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:11.044415 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:11.467035 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:11.468733 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:11.488191 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:11.542479 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:11.967642 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:11.968320 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:11.988144 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:12.042337 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:12.467639 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:12.468426 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:12.488230 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:12.542871 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:12.968020 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:12.968055 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:12.987863 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:13.042086 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:13.466608 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:13.468588 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:13.488146 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:13.542193 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:13.968335 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:13.968366 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:13.987893 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:14.041863 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:14.468140 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:14.468207 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:14.487711 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:14.541846 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:14.966665 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:14.968028 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:14.988398 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:15.041110 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:15.467255 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:15.468226 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:15.487512 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:15.703575 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:15.984438 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:15.984578 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:15.986901 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:16.041873 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:16.467681 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:16.468329 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:16.490681 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:16.541525 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:16.967458 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:16.967515 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:16.987320 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:17.041006 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:17.467332 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:17.468111 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:17.487602 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:17.541819 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:17.967448 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:17.967484 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:17.987594 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:18.068033 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:18.467713 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:18.467742 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:18.487796 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:18.541321 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:18.968704 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:18.969203 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:18.987616 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:19.040862 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:19.467825 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:19.468558 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:19.488573 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:19.541868 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:19.968100 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:19.968112 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:19.988979 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:20.042041 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:20.467867 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:20.468463 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:20.488586 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:20.541898 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:20.966469 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:20.967984 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:20.988472 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:21.041420 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:21.467520 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:21.468622 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:21.487627 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:21.541430 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:21.967295 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:21.967307 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:21.987090 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:22.041604 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:22.467429 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:22.471332 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:22.487735 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:22.541900 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:22.966311 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:22.968052 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:22.988245 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:23.042392 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:23.467397 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:23.468586 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:23.487755 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:23.568183 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:23.966834 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:23.968042 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:23.987813 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:24.041695 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:24.467268 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:24.467575 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:24.487456 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:24.541389 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:24.966701 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:24.968200 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:24.987129 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:25.041792 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:25.467032 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:25.467630 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:25.487508 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:25.541337 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:25.967313 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:25.968384 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:25.987641 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:26.041487 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:26.473463 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:26.473551 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:26.488988 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:26.542065 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:26.966584 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:26.967859 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:26.988690 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:27.041101 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:27.468243 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:27.468267 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:27.488238 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:27.541262 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:27.967662 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:27.967710 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:27.987648 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:28.041058 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:28.467298 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:28.467956 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:28.488617 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:28.568296 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:28.966938 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:28.968111 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:28.987982 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:29.041987 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:29.466682 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:29.467899 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:29.488311 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:29.541016 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:29.967240 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:29.968123 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:29.987789 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:30.041789 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:30.466523 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:30.467817 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:30.487758 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:30.541455 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:30.967383 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:30.968502 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:30.987892 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:31.041663 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:31.467340 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:31.468178 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:31.487481 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:31.541132 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:31.966983 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:31.968466 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:31.988149 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:32.042403 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:32.467720 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:32.468820 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:32.568034 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:32.568145 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:32.966717 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:32.968200 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:32.987661 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:33.041493 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:33.467449 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:33.468497 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:33.488201 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:33.541891 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:33.967273 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:33.967334 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:33.987440 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:34.041067 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:34.467724 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:34.467731 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:34.488370 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:34.541451 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:34.967125 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:34.968958 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:34.988174 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:35.042302 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:35.467649 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:35.468475 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:35.568238 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:35.568345 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:35.967658 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:35.968399 1674764 kapi.go:107] duration metric: took 44.003458125s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 11:16:35.988151 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:36.042236 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:36.467244 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:36.568123 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:36.568488 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:36.966474 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:36.987492 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:37.041174 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:37.467156 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:37.488774 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:37.541864 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:37.967906 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:37.988075 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:38.041705 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:38.467082 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:38.488651 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:38.541500 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:38.967169 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:38.988108 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:39.042689 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:39.467090 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:39.488595 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:39.541268 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:39.967424 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:39.987319 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:40.040784 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:40.466933 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:40.488678 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:40.567817 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:40.967446 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:40.987599 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:41.041906 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:41.466392 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:41.487367 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:41.541016 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:41.966859 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:41.987658 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:42.041581 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:42.466845 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:42.567882 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:42.567944 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:42.967129 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.067898 1674764 kapi.go:107] duration metric: took 44.529885953s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 11:16:43.068760 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:43.071213 1674764 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-767877 cluster.
	I1217 11:16:43.072769 1674764 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 11:16:43.074331 1674764 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 11:16:43.466731 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.489493 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:43.968131 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.988795 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:44.467197 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:44.487679 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:44.967778 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:44.988131 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:45.467424 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:45.488014 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:45.968226 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:45.987856 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:46.467751 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:46.488516 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:46.966937 1674764 kapi.go:107] duration metric: took 55.003833961s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 11:16:46.987769 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:47.488204 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.000423 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.488206 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.988276 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:49.488205 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:49.988089 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:50.488711 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:50.987396 1674764 kapi.go:107] duration metric: took 58.503448344s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 11:16:50.989012 1674764 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1217 11:16:50.990175 1674764 addons.go:530] duration metric: took 1m0.782260551s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds default-storageclass ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1217 11:16:50.990240 1674764 start.go:247] waiting for cluster config update ...
	I1217 11:16:50.990279 1674764 start.go:256] writing updated cluster config ...
	I1217 11:16:50.990621 1674764 ssh_runner.go:195] Run: rm -f paused
	I1217 11:16:50.994841 1674764 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:16:50.998057 1674764 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bk7js" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.002677 1674764 pod_ready.go:94] pod "coredns-66bc5c9577-bk7js" is "Ready"
	I1217 11:16:51.002706 1674764 pod_ready.go:86] duration metric: took 4.62363ms for pod "coredns-66bc5c9577-bk7js" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.004751 1674764 pod_ready.go:83] waiting for pod "etcd-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.008726 1674764 pod_ready.go:94] pod "etcd-addons-767877" is "Ready"
	I1217 11:16:51.008748 1674764 pod_ready.go:86] duration metric: took 3.974452ms for pod "etcd-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.010693 1674764 pod_ready.go:83] waiting for pod "kube-apiserver-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.014453 1674764 pod_ready.go:94] pod "kube-apiserver-addons-767877" is "Ready"
	I1217 11:16:51.014474 1674764 pod_ready.go:86] duration metric: took 3.762696ms for pod "kube-apiserver-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.016703 1674764 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.399436 1674764 pod_ready.go:94] pod "kube-controller-manager-addons-767877" is "Ready"
	I1217 11:16:51.399467 1674764 pod_ready.go:86] duration metric: took 382.738741ms for pod "kube-controller-manager-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.598579 1674764 pod_ready.go:83] waiting for pod "kube-proxy-dmglt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.999665 1674764 pod_ready.go:94] pod "kube-proxy-dmglt" is "Ready"
	I1217 11:16:51.999701 1674764 pod_ready.go:86] duration metric: took 401.089361ms for pod "kube-proxy-dmglt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:52.199105 1674764 pod_ready.go:83] waiting for pod "kube-scheduler-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:52.598472 1674764 pod_ready.go:94] pod "kube-scheduler-addons-767877" is "Ready"
	I1217 11:16:52.598507 1674764 pod_ready.go:86] duration metric: took 399.370525ms for pod "kube-scheduler-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:52.598522 1674764 pod_ready.go:40] duration metric: took 1.603645348s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:16:52.648093 1674764 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:16:52.650313 1674764 out.go:179] * Done! kubectl is now configured to use "addons-767877" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.286263857Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-crd5v/registry-creds" id=6d1734f8-6194-4d3f-88d0-66557629a79b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.286405096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.29231552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.292848071Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.32544639Z" level=info msg="Created container 0d90bdfc594e140226a6c9d6106073b4b6a671559a9879c0c8ff17fddab344ef: kube-system/registry-creds-764b6fb674-crd5v/registry-creds" id=6d1734f8-6194-4d3f-88d0-66557629a79b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.326243835Z" level=info msg="Starting container: 0d90bdfc594e140226a6c9d6106073b4b6a671559a9879c0c8ff17fddab344ef" id=eee3abb6-3c18-42f4-b8b7-8f1c94ef008b name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:18:23 addons-767877 crio[774]: time="2025-12-17T11:18:23.328517254Z" level=info msg="Started container" PID=8890 containerID=0d90bdfc594e140226a6c9d6106073b4b6a671559a9879c0c8ff17fddab344ef description=kube-system/registry-creds-764b6fb674-crd5v/registry-creds id=eee3abb6-3c18-42f4-b8b7-8f1c94ef008b name=/runtime.v1.RuntimeService/StartContainer sandboxID=39f2236174342ad0a846ff35feebcbfd309146903ce3e01b5c83b3bbbe15e7a0
	Dec 17 11:18:44 addons-767877 crio[774]: time="2025-12-17T11:18:44.607362366Z" level=info msg="Stopping pod sandbox: 92ae72a7c7c3f3eded8ba451f27e2ecfb10040fe4f9f4e72c88ae3f1e33e7ec1" id=8201855c-4d21-4158-8686-6d04b5eedb3c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 11:18:44 addons-767877 crio[774]: time="2025-12-17T11:18:44.607441798Z" level=info msg="Stopped pod sandbox (already stopped): 92ae72a7c7c3f3eded8ba451f27e2ecfb10040fe4f9f4e72c88ae3f1e33e7ec1" id=8201855c-4d21-4158-8686-6d04b5eedb3c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 11:18:44 addons-767877 crio[774]: time="2025-12-17T11:18:44.607932591Z" level=info msg="Removing pod sandbox: 92ae72a7c7c3f3eded8ba451f27e2ecfb10040fe4f9f4e72c88ae3f1e33e7ec1" id=1248ea06-db93-4bce-953b-376d1274d04b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 17 11:18:44 addons-767877 crio[774]: time="2025-12-17T11:18:44.613771693Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:18:44 addons-767877 crio[774]: time="2025-12-17T11:18:44.613841731Z" level=info msg="Removed pod sandbox: 92ae72a7c7c3f3eded8ba451f27e2ecfb10040fe4f9f4e72c88ae3f1e33e7ec1" id=1248ea06-db93-4bce-953b-376d1274d04b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.886639631Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9g75z/POD" id=bc9fe3e0-b4db-4fb1-b4d1-8cc254ab7be9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.886716989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.894257728Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9g75z Namespace:default ID:49337e836eb29cdffd3a31a89a2129ebfc990f44b8968c2631d92740cc9979f0 UID:e94b17b8-f693-48a4-89f3-acad0a74bb1d NetNS:/var/run/netns/2056081c-7a72-4d58-b765-075ece593fc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001cd048}] Aliases:map[]}"
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.894299669Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9g75z to CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.904645952Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9g75z Namespace:default ID:49337e836eb29cdffd3a31a89a2129ebfc990f44b8968c2631d92740cc9979f0 UID:e94b17b8-f693-48a4-89f3-acad0a74bb1d NetNS:/var/run/netns/2056081c-7a72-4d58-b765-075ece593fc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001cd048}] Aliases:map[]}"
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.90479579Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9g75z for CNI network kindnet (type=ptp)"
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.905837515Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.907121198Z" level=info msg="Ran pod sandbox 49337e836eb29cdffd3a31a89a2129ebfc990f44b8968c2631d92740cc9979f0 with infra container: default/hello-world-app-5d498dc89-9g75z/POD" id=bc9fe3e0-b4db-4fb1-b4d1-8cc254ab7be9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.908495298Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b8bb0fb2-a750-4c60-9934-7e6a2c710c92 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.908664236Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b8bb0fb2-a750-4c60-9934-7e6a2c710c92 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.908712915Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=b8bb0fb2-a750-4c60-9934-7e6a2c710c92 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.909340714Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=5d4ce9c4-e8ac-4681-955f-30064cbe3096 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:19:33 addons-767877 crio[774]: time="2025-12-17T11:19:33.918932831Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0d90bdfc594e1       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   39f2236174342       registry-creds-764b6fb674-crd5v             kube-system
	5772d128310e6       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago        Running             nginx                                    0                   0b2f1d9d743bd       nginx                                       default
	f15ef20302864       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   7a852f2314b40       busybox                                     default
	960e339dfeb9d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                    kube-system
	45dced8160416       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                    kube-system
	fa2ebcf83b879       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                    kube-system
	49ba8a4cf9b16       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                    kube-system
	4b8f30633c332       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                    kube-system
	7ac64d7ceb05c       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago        Running             controller                               0                   7fb8626a1771c       ingress-nginx-controller-85d4c799dd-z2vvn   ingress-nginx
	c6b8c1b1547d4       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago        Exited              patch                                    2                   3b3e4c3bd47db       ingress-nginx-admission-patch-6dj9n         ingress-nginx
	25d633e9324b0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   63c06dbe6c307       gcp-auth-78565c9fb4-cbs85                   gcp-auth
	7fce542d2390c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago        Running             gadget                                   0                   8d456ea4642ab       gadget-8cr2g                                gadget
	29bb23388cfae       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago        Running             csi-external-health-monitor-controller   0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                    kube-system
	2019334dda3cb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago        Running             registry-proxy                           0                   ee3b8ab6c4ac3       registry-proxy-ffwc5                        kube-system
	a039ab85e94e7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   47590040e4a77       snapshot-controller-7d9fbc56b8-dm88z        kube-system
	6e7d200d76d2d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago        Exited              create                                   0                   0fb1ffad3df66       ingress-nginx-admission-create-5zpxl        ingress-nginx
	743ec64dbbba0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   9551622c5fad2       amd-gpu-device-plugin-54g7h                 kube-system
	27d01bff29030       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b6acc084613f2       snapshot-controller-7d9fbc56b8-2jdlm        kube-system
	85d34444a3d52       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   cb12959d3c61a       csi-hostpath-resizer-0                      kube-system
	b486bd1049fb4       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   1653853ccb50d       nvidia-device-plugin-daemonset-29qcw        kube-system
	7894f028137e7       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   c3d5551c84036       registry-6b586f9694-lc6z2                   kube-system
	7ffd867e55d61       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              3 minutes ago        Running             yakd                                     0                   3d142fa0be2dc       yakd-dashboard-6654c87f9b-bb445             yakd-dashboard
	f0b0e753b5cc2       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   29c170e3069ba       local-path-provisioner-648f6765c9-wwkwd     local-path-storage
	a9e2a2f02ae68       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   ffdf6ba701f8b       csi-hostpath-attacher-0                     kube-system
	99be406f81626       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   90196c535089c       kube-ingress-dns-minikube                   kube-system
	3d6dc27d27364       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   58675b31b34b0       metrics-server-85b7d694d7-q89cn             kube-system
	f9330c6d46d57       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   95e517497e177       cloud-spanner-emulator-5bdddb765-v9nvg      default
	d7add53e16ff4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   742aaf64d4d43       coredns-66bc5c9577-bk7js                    kube-system
	710c232068b61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   c5f4de2b5aafd       storage-provisioner                         kube-system
	27822a03994e6       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           3 minutes ago        Running             kindnet-cni                              0                   e5f8a0b22f605       kindnet-nkfjh                               kube-system
	e8a13ad739d84       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             3 minutes ago        Running             kube-proxy                               0                   db9b88d602258       kube-proxy-dmglt                            kube-system
	9f8c99a2db49b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             3 minutes ago        Running             kube-controller-manager                  0                   46ffae26bdc9f       kube-controller-manager-addons-767877       kube-system
	d01e74fe7a95c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             3 minutes ago        Running             kube-apiserver                           0                   989b7a46df257       kube-apiserver-addons-767877                kube-system
	f965996f6131f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             3 minutes ago        Running             kube-scheduler                           0                   01c99f82da5ab       kube-scheduler-addons-767877                kube-system
	59bd12719079a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago        Running             etcd                                     0                   7ed567bb5f749       etcd-addons-767877                          kube-system
	
	
	==> coredns [d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370] <==
	[INFO] 10.244.0.21:39302 - 49541 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104934s
	[INFO] 10.244.0.21:36984 - 44461 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005624542s
	[INFO] 10.244.0.21:53572 - 1653 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006141938s
	[INFO] 10.244.0.21:43092 - 41379 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004756868s
	[INFO] 10.244.0.21:37902 - 6926 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006645864s
	[INFO] 10.244.0.21:40449 - 40331 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004319917s
	[INFO] 10.244.0.21:55291 - 14990 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007435945s
	[INFO] 10.244.0.21:38506 - 41843 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000928567s
	[INFO] 10.244.0.21:41693 - 17276 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002469617s
	[INFO] 10.244.0.26:38811 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000212436s
	[INFO] 10.244.0.26:49476 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000196459s
	[INFO] 10.244.0.31:45323 - 55880 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000305427s
	[INFO] 10.244.0.31:46091 - 8825 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000372902s
	[INFO] 10.244.0.31:46006 - 48824 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000160482s
	[INFO] 10.244.0.31:35573 - 17308 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000201904s
	[INFO] 10.244.0.31:34988 - 29503 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00016045s
	[INFO] 10.244.0.31:56793 - 25469 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000218241s
	[INFO] 10.244.0.31:48867 - 29642 "AAAA IN accounts.google.com.europe-west1-b.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005165296s
	[INFO] 10.244.0.31:56557 - 62140 "A IN accounts.google.com.europe-west1-b.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005265307s
	[INFO] 10.244.0.31:55556 - 63560 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005700192s
	[INFO] 10.244.0.31:44400 - 56249 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005801421s
	[INFO] 10.244.0.31:43850 - 3191 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004447269s
	[INFO] 10.244.0.31:39089 - 38559 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004650666s
	[INFO] 10.244.0.31:41716 - 13376 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001913947s
	[INFO] 10.244.0.31:37290 - 58882 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002089487s
	
	
	==> describe nodes <==
	Name:               addons-767877
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-767877
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=addons-767877
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_15_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-767877
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-767877"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:15:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-767877
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:19:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:18:48 +0000   Wed, 17 Dec 2025 11:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:18:48 +0000   Wed, 17 Dec 2025 11:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:18:48 +0000   Wed, 17 Dec 2025 11:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:18:48 +0000   Wed, 17 Dec 2025 11:16:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-767877
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                6f0cef8c-aca7-4308-b71d-bd92de2642d5
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-5bdddb765-v9nvg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  default                     hello-world-app-5d498dc89-9g75z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-8cr2g                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  gcp-auth                    gcp-auth-78565c9fb4-cbs85                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-z2vvn    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m44s
	  kube-system                 amd-gpu-device-plugin-54g7h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-66bc5c9577-bk7js                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m45s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 csi-hostpathplugin-swlsr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-addons-767877                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m51s
	  kube-system                 kindnet-nkfjh                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m46s
	  kube-system                 kube-apiserver-addons-767877                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-addons-767877        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-dmglt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-scheduler-addons-767877                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 metrics-server-85b7d694d7-q89cn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m44s
	  kube-system                 nvidia-device-plugin-daemonset-29qcw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 registry-6b586f9694-lc6z2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 registry-creds-764b6fb674-crd5v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 registry-proxy-ffwc5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 snapshot-controller-7d9fbc56b8-2jdlm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-dm88z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  local-path-storage          local-path-provisioner-648f6765c9-wwkwd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-bb445              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m56s)  kubelet          Node addons-767877 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m56s)  kubelet          Node addons-767877 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x8 over 3m56s)  kubelet          Node addons-767877 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s                  kubelet          Node addons-767877 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s                  kubelet          Node addons-767877 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s                  kubelet          Node addons-767877 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m47s                  node-controller  Node addons-767877 event: Registered Node addons-767877 in Controller
	  Normal  NodeReady                3m32s                  kubelet          Node addons-767877 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875] <==
	{"level":"warn","ts":"2025-12-17T11:15:41.348076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.354801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.361609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.368368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.374871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.382579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.389425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.396752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.404284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.411761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.418341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.432478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.439224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.446062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.497513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:53.013917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:53.020793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:15.701617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.290066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:16:15.701725Z","caller":"traceutil/trace.go:172","msg":"trace[1360142914] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1008; }","duration":"127.411285ms","start":"2025-12-17T11:16:15.574298Z","end":"2025-12-17T11:16:15.701709Z","steps":["trace[1360142914] 'range keys from in-memory index tree'  (duration: 127.201575ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:16:15.812520Z","caller":"traceutil/trace.go:172","msg":"trace[2089671935] transaction","detail":"{read_only:false; response_revision:1009; number_of_response:1; }","duration":"102.281535ms","start":"2025-12-17T11:16:15.710219Z","end":"2025-12-17T11:16:15.812500Z","steps":["trace[2089671935] 'process raft request'  (duration: 102.143069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:16:18.939917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:18.946507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:18.962085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:18.969440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T11:16:48.206890Z","caller":"traceutil/trace.go:172","msg":"trace[1891087675] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"164.272878ms","start":"2025-12-17T11:16:48.042597Z","end":"2025-12-17T11:16:48.206870Z","steps":["trace[1891087675] 'process raft request'  (duration: 100.622087ms)","trace[1891087675] 'compare'  (duration: 63.540334ms)"],"step_count":2}
	
	
	==> gcp-auth [25d633e9324b084a2402d79743b4434abf41696f0d1cab8205102f1e7493f3dd] <==
	2025/12/17 11:16:42 GCP Auth Webhook started!
	2025/12/17 11:16:53 Ready to marshal response ...
	2025/12/17 11:16:53 Ready to write response ...
	2025/12/17 11:16:53 Ready to marshal response ...
	2025/12/17 11:16:53 Ready to write response ...
	2025/12/17 11:16:53 Ready to marshal response ...
	2025/12/17 11:16:53 Ready to write response ...
	2025/12/17 11:17:08 Ready to marshal response ...
	2025/12/17 11:17:08 Ready to write response ...
	2025/12/17 11:17:08 Ready to marshal response ...
	2025/12/17 11:17:08 Ready to write response ...
	2025/12/17 11:17:08 Ready to marshal response ...
	2025/12/17 11:17:08 Ready to write response ...
	2025/12/17 11:17:11 Ready to marshal response ...
	2025/12/17 11:17:11 Ready to write response ...
	2025/12/17 11:17:20 Ready to marshal response ...
	2025/12/17 11:17:20 Ready to write response ...
	2025/12/17 11:17:29 Ready to marshal response ...
	2025/12/17 11:17:29 Ready to write response ...
	2025/12/17 11:17:42 Ready to marshal response ...
	2025/12/17 11:17:42 Ready to write response ...
	2025/12/17 11:19:33 Ready to marshal response ...
	2025/12/17 11:19:33 Ready to write response ...
	
	
	==> kernel <==
	 11:19:35 up  5:01,  0 user,  load average: 0.31, 0.63, 1.09
	Linux addons-767877 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766] <==
	I1217 11:17:33.262370       1 main.go:301] handling current node
	I1217 11:17:43.261773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:17:43.261812       1 main.go:301] handling current node
	I1217 11:17:53.262109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:17:53.262141       1 main.go:301] handling current node
	I1217 11:18:03.267257       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:18:03.267308       1 main.go:301] handling current node
	I1217 11:18:13.271382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:18:13.271427       1 main.go:301] handling current node
	I1217 11:18:23.262902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:18:23.262947       1 main.go:301] handling current node
	I1217 11:18:33.262801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:18:33.262840       1 main.go:301] handling current node
	I1217 11:18:43.262232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:18:43.262286       1 main.go:301] handling current node
	I1217 11:18:53.262080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:18:53.262119       1 main.go:301] handling current node
	I1217 11:19:03.261877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:19:03.261940       1 main.go:301] handling current node
	I1217 11:19:13.270796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:19:13.270856       1 main.go:301] handling current node
	I1217 11:19:23.261844       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:19:23.261887       1 main.go:301] handling current node
	I1217 11:19:33.268910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:19:33.268944       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc] <==
	W1217 11:16:12.687259       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 11:16:12.687293       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1217 11:16:12.687307       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1217 11:16:12.687317       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1217 11:16:12.688452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1217 11:16:16.695896       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 11:16:16.695965       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 11:16:16.695972       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.1.140:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.1.140:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1217 11:16:16.704298       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1217 11:16:18.939802       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 11:16:18.946458       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 11:16:18.961994       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 11:16:18.969438       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1217 11:17:01.349560       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60660: use of closed network connection
	E1217 11:17:01.514460       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60690: use of closed network connection
	I1217 11:17:08.320228       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 11:17:08.547301       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.173.226"}
	I1217 11:17:35.166607       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 11:19:33.647651       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.45.11"}
	
	
	==> kube-controller-manager [9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a] <==
	I1217 11:15:48.924954       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 11:15:48.925002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:15:48.925385       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 11:15:48.925405       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 11:15:48.925436       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:15:48.925487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:15:48.925564       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:15:48.925645       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 11:15:48.925658       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:15:48.925788       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 11:15:48.925796       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:15:48.926007       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 11:15:48.926138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 11:15:48.926702       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:15:48.927696       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 11:15:48.929011       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 11:15:48.929702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:15:48.947563       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 11:15:51.513421       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 11:16:03.926927       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 11:16:18.933939       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 11:16:18.934004       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 11:16:18.956166       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 11:16:19.034635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:16:19.056856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989] <==
	I1217 11:15:50.593470       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:15:50.800615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:15:51.006495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:15:51.006564       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 11:15:51.006681       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:15:51.407580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:15:51.407778       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:15:51.431965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:15:51.432708       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:15:51.433053       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:15:51.445255       1 config.go:200] "Starting service config controller"
	I1217 11:15:51.445284       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:15:51.445366       1 config.go:309] "Starting node config controller"
	I1217 11:15:51.445384       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:15:51.445429       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:15:51.445436       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:15:51.445457       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:15:51.445463       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:15:51.551780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:15:51.551825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:15:51.551838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:15:51.554122       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86] <==
	E1217 11:15:41.946769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:15:41.946803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:15:41.946862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:15:41.946908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:15:41.946957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:15:41.946962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:15:41.947028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:15:41.947085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:15:41.947094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 11:15:42.826997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:15:42.841598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:15:42.849721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:15:42.859193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:15:42.881669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:15:42.931846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:15:42.965928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:15:42.987138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:15:43.019464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:15:43.052439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:15:43.106614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:15:43.156344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 11:15:43.230691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 11:15:43.252210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:15:43.274608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 11:15:46.043459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:17:44 addons-767877 kubelet[1307]: I1217 11:17:44.552693    1307 scope.go:117] "RemoveContainer" containerID="b9d0b96855eaec6b66792566bf9e6d950697960487665a578da0c2b6fc93d6d6"
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.941462    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^012fc6f9-db3a-11f0-9005-0e6fbc53eacc\") pod \"d1720e3e-3612-4821-a8d5-a964ffe3e949\" (UID: \"d1720e3e-3612-4821-a8d5-a964ffe3e949\") "
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.941527    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d1720e3e-3612-4821-a8d5-a964ffe3e949-gcp-creds\") pod \"d1720e3e-3612-4821-a8d5-a964ffe3e949\" (UID: \"d1720e3e-3612-4821-a8d5-a964ffe3e949\") "
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.941624    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wv9x\" (UniqueName: \"kubernetes.io/projected/d1720e3e-3612-4821-a8d5-a964ffe3e949-kube-api-access-6wv9x\") pod \"d1720e3e-3612-4821-a8d5-a964ffe3e949\" (UID: \"d1720e3e-3612-4821-a8d5-a964ffe3e949\") "
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.941625    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1720e3e-3612-4821-a8d5-a964ffe3e949-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d1720e3e-3612-4821-a8d5-a964ffe3e949" (UID: "d1720e3e-3612-4821-a8d5-a964ffe3e949"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.941790    1307 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d1720e3e-3612-4821-a8d5-a964ffe3e949-gcp-creds\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.944086    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1720e3e-3612-4821-a8d5-a964ffe3e949-kube-api-access-6wv9x" (OuterVolumeSpecName: "kube-api-access-6wv9x") pod "d1720e3e-3612-4821-a8d5-a964ffe3e949" (UID: "d1720e3e-3612-4821-a8d5-a964ffe3e949"). InnerVolumeSpecName "kube-api-access-6wv9x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 11:17:48 addons-767877 kubelet[1307]: I1217 11:17:48.944996    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^012fc6f9-db3a-11f0-9005-0e6fbc53eacc" (OuterVolumeSpecName: "task-pv-storage") pod "d1720e3e-3612-4821-a8d5-a964ffe3e949" (UID: "d1720e3e-3612-4821-a8d5-a964ffe3e949"). InnerVolumeSpecName "pvc-a89abf99-ca61-475b-bc3b-502ff577cdab". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.042184    1307 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-a89abf99-ca61-475b-bc3b-502ff577cdab\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^012fc6f9-db3a-11f0-9005-0e6fbc53eacc\") on node \"addons-767877\" "
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.042234    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wv9x\" (UniqueName: \"kubernetes.io/projected/d1720e3e-3612-4821-a8d5-a964ffe3e949-kube-api-access-6wv9x\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.046917    1307 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-a89abf99-ca61-475b-bc3b-502ff577cdab" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^012fc6f9-db3a-11f0-9005-0e6fbc53eacc") on node "addons-767877"
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.143347    1307 reconciler_common.go:299] "Volume detached for volume \"pvc-a89abf99-ca61-475b-bc3b-502ff577cdab\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^012fc6f9-db3a-11f0-9005-0e6fbc53eacc\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.158881    1307 scope.go:117] "RemoveContainer" containerID="6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686"
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.168605    1307 scope.go:117] "RemoveContainer" containerID="6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686"
	Dec 17 11:17:49 addons-767877 kubelet[1307]: E1217 11:17:49.169163    1307 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686\": container with ID starting with 6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686 not found: ID does not exist" containerID="6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686"
	Dec 17 11:17:49 addons-767877 kubelet[1307]: I1217 11:17:49.169216    1307 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686"} err="failed to get container status \"6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686\": rpc error: code = NotFound desc = could not find container \"6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686\": container with ID starting with 6627880acf769d52106809b1c07ff522050e4040c24922b4b3f319f6e8322686 not found: ID does not exist"
	Dec 17 11:17:50 addons-767877 kubelet[1307]: I1217 11:17:50.532110    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1720e3e-3612-4821-a8d5-a964ffe3e949" path="/var/lib/kubelet/pods/d1720e3e-3612-4821-a8d5-a964ffe3e949/volumes"
	Dec 17 11:17:57 addons-767877 kubelet[1307]: I1217 11:17:57.529153    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ffwc5" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:18:06 addons-767877 kubelet[1307]: E1217 11:18:06.854355    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-crd5v" podUID="ae965757-a78e-4e4e-b450-21f182854184"
	Dec 17 11:18:47 addons-767877 kubelet[1307]: I1217 11:18:47.529601    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-29qcw" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:18:56 addons-767877 kubelet[1307]: I1217 11:18:56.529405    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-54g7h" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:19:20 addons-767877 kubelet[1307]: I1217 11:19:20.529184    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ffwc5" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:19:33 addons-767877 kubelet[1307]: I1217 11:19:33.573756    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-crd5v" podStartSLOduration=221.878077384 podStartE2EDuration="3m43.57372817s" podCreationTimestamp="2025-12-17 11:15:50 +0000 UTC" firstStartedPulling="2025-12-17 11:18:21.554836999 +0000 UTC m=+157.113510692" lastFinishedPulling="2025-12-17 11:18:23.250487787 +0000 UTC m=+158.809161478" observedRunningTime="2025-12-17 11:18:24.322722403 +0000 UTC m=+159.881396104" watchObservedRunningTime="2025-12-17 11:19:33.57372817 +0000 UTC m=+229.132401871"
	Dec 17 11:19:33 addons-767877 kubelet[1307]: I1217 11:19:33.660849    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8pxz\" (UniqueName: \"kubernetes.io/projected/e94b17b8-f693-48a4-89f3-acad0a74bb1d-kube-api-access-d8pxz\") pod \"hello-world-app-5d498dc89-9g75z\" (UID: \"e94b17b8-f693-48a4-89f3-acad0a74bb1d\") " pod="default/hello-world-app-5d498dc89-9g75z"
	Dec 17 11:19:33 addons-767877 kubelet[1307]: I1217 11:19:33.660920    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e94b17b8-f693-48a4-89f3-acad0a74bb1d-gcp-creds\") pod \"hello-world-app-5d498dc89-9g75z\" (UID: \"e94b17b8-f693-48a4-89f3-acad0a74bb1d\") " pod="default/hello-world-app-5d498dc89-9g75z"
	
	
	==> storage-provisioner [710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa] <==
	W1217 11:19:11.155645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:13.159381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:13.165351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:15.169660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:15.174236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:17.178048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:17.183666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:19.187517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:19.192215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:21.195954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:21.202762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:23.205863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:23.211007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:25.215460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:25.220013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:27.223602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:27.228823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:29.232552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:29.238344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:31.242426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:31.248627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:33.251998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:33.256515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:35.260499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:35.265994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-767877 -n addons-767877
helpers_test.go:270: (dbg) Run:  kubectl --context addons-767877 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-767877 describe pod ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-767877 describe pod ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n: exit status 1 (64.278772ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5zpxl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6dj9n" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-767877 describe pod ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (279.956901ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:19:36.219848 1689559 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:19:36.220176 1689559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:19:36.220190 1689559 out.go:374] Setting ErrFile to fd 2...
	I1217 11:19:36.220195 1689559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:19:36.220479 1689559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:19:36.220847 1689559 mustload.go:66] Loading cluster: addons-767877
	I1217 11:19:36.221264 1689559 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:19:36.221288 1689559 addons.go:622] checking whether the cluster is paused
	I1217 11:19:36.221399 1689559 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:19:36.221416 1689559 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:19:36.221910 1689559 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:19:36.244612 1689559 ssh_runner.go:195] Run: systemctl --version
	I1217 11:19:36.244690 1689559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:19:36.267202 1689559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:19:36.363332 1689559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:19:36.363432 1689559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:19:36.399095 1689559 cri.go:89] found id: "0d90bdfc594e140226a6c9d6106073b4b6a671559a9879c0c8ff17fddab344ef"
	I1217 11:19:36.399128 1689559 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:19:36.399133 1689559 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:19:36.399136 1689559 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:19:36.399139 1689559 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:19:36.399143 1689559 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:19:36.399146 1689559 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:19:36.399149 1689559 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:19:36.399152 1689559 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:19:36.399163 1689559 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:19:36.399166 1689559 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:19:36.399168 1689559 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:19:36.399171 1689559 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:19:36.399174 1689559 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:19:36.399177 1689559 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:19:36.399184 1689559 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:19:36.399188 1689559 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:19:36.399192 1689559 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:19:36.399195 1689559 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:19:36.399198 1689559 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:19:36.399200 1689559 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:19:36.399203 1689559 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:19:36.399205 1689559 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:19:36.399208 1689559 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:19:36.399211 1689559 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:19:36.399214 1689559 cri.go:89] found id: ""
	I1217 11:19:36.399270 1689559 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:19:36.417398 1689559 out.go:203] 
	W1217 11:19:36.419059 1689559 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:19:36.419093 1689559 out.go:285] * 
	* 
	W1217 11:19:36.425557 1689559 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:19:36.427792 1689559 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable ingress --alsologtostderr -v=1: exit status 11 (269.133039ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:19:36.501317 1689621 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:19:36.501458 1689621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:19:36.501465 1689621 out.go:374] Setting ErrFile to fd 2...
	I1217 11:19:36.501470 1689621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:19:36.501771 1689621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:19:36.502086 1689621 mustload.go:66] Loading cluster: addons-767877
	I1217 11:19:36.502476 1689621 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:19:36.502495 1689621 addons.go:622] checking whether the cluster is paused
	I1217 11:19:36.502607 1689621 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:19:36.502620 1689621 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:19:36.502982 1689621 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:19:36.523140 1689621 ssh_runner.go:195] Run: systemctl --version
	I1217 11:19:36.523213 1689621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:19:36.543301 1689621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:19:36.638876 1689621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:19:36.638963 1689621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:19:36.673314 1689621 cri.go:89] found id: "0d90bdfc594e140226a6c9d6106073b4b6a671559a9879c0c8ff17fddab344ef"
	I1217 11:19:36.673341 1689621 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:19:36.673347 1689621 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:19:36.673352 1689621 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:19:36.673357 1689621 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:19:36.673360 1689621 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:19:36.673363 1689621 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:19:36.673366 1689621 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:19:36.673368 1689621 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:19:36.673391 1689621 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:19:36.673397 1689621 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:19:36.673402 1689621 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:19:36.673406 1689621 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:19:36.673411 1689621 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:19:36.673415 1689621 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:19:36.673448 1689621 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:19:36.673457 1689621 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:19:36.673464 1689621 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:19:36.673467 1689621 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:19:36.673470 1689621 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:19:36.673475 1689621 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:19:36.673478 1689621 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:19:36.673480 1689621 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:19:36.673483 1689621 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:19:36.673487 1689621 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:19:36.673491 1689621 cri.go:89] found id: ""
	I1217 11:19:36.673559 1689621 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:19:36.689605 1689621 out.go:203] 
	W1217 11:19:36.691297 1689621 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:19:36.691319 1689621 out.go:285] * 
	* 
	W1217 11:19:36.697493 1689621 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:19:36.699141 1689621 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.65s)
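Each addons-disable failure in this report exits with MK_ADDON_DISABLE_PAUSED for the same reason visible in the stderr above: the paused check shells out to "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this crio node. A minimal sketch for confirming that state on the node, assuming the same profile name (illustrative, not part of the test run):

	out/minikube-linux-amd64 -p addons-767877 ssh -- "sudo ls -d /run/runc"
	out/minikube-linux-amd64 -p addons-767877 ssh -- "sudo crictl ps --state Running --quiet | head"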

                                                
                                    
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
I1217 11:17:01.787492 1672941 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
helpers_test.go:353: "gadget-8cr2g" [e3204263-3629-4614-80ae-2e7000e23528] Running
I1217 11:17:01.787513 1672941 kapi.go:107] duration metric: took 3.699358ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003685909s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (271.307279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:07.851298 1683163 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:07.851410 1683163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:07.851415 1683163 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:07.851419 1683163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:07.851644 1683163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:07.851971 1683163 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:07.852477 1683163 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:07.852498 1683163 addons.go:622] checking whether the cluster is paused
	I1217 11:17:07.852647 1683163 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:07.852670 1683163 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:07.853047 1683163 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:07.875839 1683163 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:07.875903 1683163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:07.898078 1683163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:07.994200 1683163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:07.994303 1683163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:08.026685 1683163 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:08.026706 1683163 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:08.026713 1683163 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:08.026717 1683163 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:08.026722 1683163 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:08.026729 1683163 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:08.026735 1683163 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:08.026740 1683163 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:08.026745 1683163 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:08.026754 1683163 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:08.026763 1683163 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:08.026771 1683163 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:08.026776 1683163 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:08.026780 1683163 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:08.026785 1683163 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:08.026794 1683163 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:08.026801 1683163 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:08.026808 1683163 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:08.026813 1683163 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:08.026818 1683163 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:08.026835 1683163 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:08.026842 1683163 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:08.026847 1683163 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:08.026851 1683163 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:08.026855 1683163 cri.go:89] found id: ""
	I1217 11:17:08.026889 1683163 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:08.042914 1683163 out.go:203] 
	W1217 11:17:08.044745 1683163 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:08.044786 1683163 out.go:285] * 
	* 
	W1217 11:17:08.050940 1683163 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:08.052439 1683163 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.047223ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004471766s
addons_test.go:465: (dbg) Run:  kubectl --context addons-767877 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (259.463411ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:33.183939 1686462 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:33.184182 1686462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:33.184191 1686462 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:33.184195 1686462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:33.184404 1686462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:33.184691 1686462 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:33.185041 1686462 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:33.185058 1686462 addons.go:622] checking whether the cluster is paused
	I1217 11:17:33.185139 1686462 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:33.185160 1686462 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:33.185593 1686462 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:33.203980 1686462 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:33.204033 1686462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:33.223056 1686462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:33.317252 1686462 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:33.317338 1686462 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:33.350770 1686462 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:33.350795 1686462 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:33.350799 1686462 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:33.350802 1686462 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:33.350804 1686462 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:33.350808 1686462 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:33.350811 1686462 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:33.350815 1686462 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:33.350820 1686462 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:33.350829 1686462 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:33.350834 1686462 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:33.350838 1686462 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:33.350852 1686462 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:33.350937 1686462 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:33.350962 1686462 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:33.350968 1686462 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:33.350971 1686462 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:33.350975 1686462 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:33.350978 1686462 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:33.350981 1686462 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:33.350983 1686462 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:33.350986 1686462 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:33.350989 1686462 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:33.350992 1686462 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:33.350995 1686462 cri.go:89] found id: ""
	I1217 11:17:33.351042 1686462 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:33.366908 1686462 out.go:203] 
	W1217 11:17:33.368183 1686462 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:33.368218 1686462 out.go:285] * 
	* 
	W1217 11:17:33.374482 1686462 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:33.376205 1686462 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)

                                                
                                    
TestAddons/parallel/CSI (48.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.761438ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-767877 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-767877 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [3ef98908-1721-44c0-b17d-85b2751e6e4a] Pending
helpers_test.go:353: "task-pv-pod" [3ef98908-1721-44c0-b17d-85b2751e6e4a] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.004097568s
addons_test.go:574: (dbg) Run:  kubectl --context addons-767877 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-767877 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-767877 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-767877 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-767877 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-767877 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-767877 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d1720e3e-3612-4821-a8d5-a964ffe3e949] Pending
helpers_test.go:353: "task-pv-pod-restore" [d1720e3e-3612-4821-a8d5-a964ffe3e949] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.004646047s
addons_test.go:616: (dbg) Run:  kubectl --context addons-767877 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-767877 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-767877 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (261.042569ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:49.580027 1687337 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:49.580322 1687337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:49.580338 1687337 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:49.580344 1687337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:49.580610 1687337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:49.580917 1687337 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:49.581235 1687337 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:49.581249 1687337 addons.go:622] checking whether the cluster is paused
	I1217 11:17:49.581348 1687337 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:49.581363 1687337 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:49.581840 1687337 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:49.601909 1687337 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:49.601970 1687337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:49.621305 1687337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:49.715903 1687337 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:49.715984 1687337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:49.748988 1687337 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:49.749016 1687337 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:49.749024 1687337 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:49.749029 1687337 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:49.749034 1687337 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:49.749039 1687337 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:49.749044 1687337 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:49.749050 1687337 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:49.749055 1687337 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:49.749072 1687337 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:49.749079 1687337 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:49.749082 1687337 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:49.749085 1687337 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:49.749088 1687337 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:49.749090 1687337 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:49.749098 1687337 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:49.749103 1687337 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:49.749108 1687337 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:49.749111 1687337 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:49.749113 1687337 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:49.749116 1687337 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:49.749119 1687337 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:49.749121 1687337 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:49.749124 1687337 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:49.749127 1687337 cri.go:89] found id: ""
	I1217 11:17:49.749168 1687337 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:49.765011 1687337 out.go:203] 
	W1217 11:17:49.766519 1687337 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:49.766556 1687337 out.go:285] * 
	* 
	W1217 11:17:49.772763 1687337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:49.774559 1687337 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (263.177706ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:49.841319 1687400 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:49.841621 1687400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:49.841632 1687400 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:49.841639 1687400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:49.841873 1687400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:49.842171 1687400 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:49.842560 1687400 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:49.842584 1687400 addons.go:622] checking whether the cluster is paused
	I1217 11:17:49.842703 1687400 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:49.842720 1687400 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:49.843154 1687400 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:49.863660 1687400 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:49.863712 1687400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:49.883647 1687400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:49.978185 1687400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:49.978291 1687400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:50.011513 1687400 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:50.011573 1687400 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:50.011578 1687400 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:50.011583 1687400 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:50.011586 1687400 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:50.011590 1687400 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:50.011593 1687400 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:50.011596 1687400 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:50.011598 1687400 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:50.011616 1687400 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:50.011619 1687400 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:50.011622 1687400 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:50.011625 1687400 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:50.011635 1687400 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:50.011637 1687400 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:50.011644 1687400 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:50.011649 1687400 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:50.011654 1687400 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:50.011657 1687400 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:50.011659 1687400 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:50.011662 1687400 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:50.011664 1687400 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:50.011667 1687400 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:50.011669 1687400 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:50.011672 1687400 cri.go:89] found id: ""
	I1217 11:17:50.011729 1687400 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:50.028056 1687400 out.go:203] 
	W1217 11:17:50.029806 1687400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:50.029829 1687400 out.go:285] * 
	* 
	W1217 11:17:50.036362 1687400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:50.038015 1687400 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.26s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-767877 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-767877 --alsologtostderr -v=1: exit status 11 (256.749975ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:26.063225 1685432 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:26.063473 1685432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:26.063482 1685432 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:26.063487 1685432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:26.063707 1685432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:26.064033 1685432 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:26.064361 1685432 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:26.064376 1685432 addons.go:622] checking whether the cluster is paused
	I1217 11:17:26.064452 1685432 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:26.064464 1685432 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:26.064927 1685432 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:26.084151 1685432 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:26.084222 1685432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:26.102611 1685432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:26.196154 1685432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:26.196236 1685432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:26.228439 1685432 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:26.228482 1685432 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:26.228494 1685432 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:26.228500 1685432 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:26.228579 1685432 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:26.228611 1685432 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:26.228617 1685432 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:26.228625 1685432 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:26.228629 1685432 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:26.228756 1685432 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:26.228778 1685432 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:26.228782 1685432 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:26.228785 1685432 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:26.228788 1685432 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:26.228791 1685432 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:26.228812 1685432 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:26.228817 1685432 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:26.228822 1685432 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:26.228825 1685432 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:26.228828 1685432 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:26.228831 1685432 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:26.228833 1685432 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:26.228836 1685432 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:26.228839 1685432 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:26.228841 1685432 cri.go:89] found id: ""
	I1217 11:17:26.228901 1685432 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:26.244325 1685432 out.go:203] 
	W1217 11:17:26.245677 1685432 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:26.245702 1685432 out.go:285] * 
	* 
	W1217 11:17:26.251816 1685432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:26.253577 1685432 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-767877 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-767877
helpers_test.go:244: (dbg) docker inspect addons-767877:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336",
	        "Created": "2025-12-17T11:15:26.334854184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1675422,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:15:26.369931962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/hosts",
	        "LogPath": "/var/lib/docker/containers/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336/1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336-json.log",
	        "Name": "/addons-767877",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-767877:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-767877",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ab21d83dadcbb788b2e832807aeec43a5cb6c2c62e27d3d1a391bace09d8336",
	                "LowerDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d686513fabf00340aa7f7cea53208b69d40d5068b9b68ab521ff6c994f6321e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-767877",
	                "Source": "/var/lib/docker/volumes/addons-767877/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-767877",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-767877",
	                "name.minikube.sigs.k8s.io": "addons-767877",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "20345ac2be468ba91a09fd7e152a97351d3afcc356bd5df2c07f464fbab12a31",
	            "SandboxKey": "/var/run/docker/netns/20345ac2be46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34301"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34302"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34305"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34303"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34304"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-767877": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6830fa5438c4af4872bd8e8877338ec3c8fbdb0a5061b2fd55580305f7682b2f",
	                    "EndpointID": "2b336b2a24e44d552e0fb9f92d832810b84b5357492c240ca5f38e2da5188569",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "52:52:20:5a:0f:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-767877",
	                        "1ab21d83dadc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-767877 -n addons-767877
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-767877 logs -n 25: (1.245863197s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-951167                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-951167   │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ delete  │ -p download-only-122874                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-122874   │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ delete  │ -p download-only-921404                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-921404   │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ delete  │ -p download-only-951167                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-951167   │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ start   │ --download-only -p download-docker-821854 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-821854 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ delete  │ -p download-docker-821854                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-821854 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ start   │ --download-only -p binary-mirror-011622 --alsologtostderr --binary-mirror http://127.0.0.1:38139 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-011622   │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ delete  │ -p binary-mirror-011622                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-011622   │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ addons  │ enable dashboard -p addons-767877                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-767877                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ start   │ -p addons-767877 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:16 UTC │
	│ addons  │ addons-767877 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:16 UTC │                     │
	│ addons  │ addons-767877 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-767877                                                                                                                                                                                                                                                                                                                                                                                           │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-767877 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ip      │ addons-767877 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-767877 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ssh     │ addons-767877 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ ssh     │ addons-767877 ssh cat /opt/local-path-provisioner/pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-767877 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ addons-767877 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	│ addons  │ enable headlamp -p addons-767877 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-767877          │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:15:06
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:15:06.114465 1674764 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:15:06.114607 1674764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:06.114613 1674764 out.go:374] Setting ErrFile to fd 2...
	I1217 11:15:06.114617 1674764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:06.114809 1674764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:15:06.115426 1674764 out.go:368] Setting JSON to false
	I1217 11:15:06.116305 1674764 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17851,"bootTime":1765952255,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:15:06.116381 1674764 start.go:143] virtualization: kvm guest
	I1217 11:15:06.118487 1674764 out.go:179] * [addons-767877] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:15:06.120344 1674764 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:15:06.120353 1674764 notify.go:221] Checking for updates...
	I1217 11:15:06.123159 1674764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:15:06.124504 1674764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:15:06.125696 1674764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:15:06.126916 1674764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:15:06.128269 1674764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:15:06.129712 1674764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:15:06.154073 1674764 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:15:06.154233 1674764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:15:06.213973 1674764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 11:15:06.204282649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:15:06.214108 1674764 docker.go:319] overlay module found
	I1217 11:15:06.216097 1674764 out.go:179] * Using the docker driver based on user configuration
	I1217 11:15:06.217691 1674764 start.go:309] selected driver: docker
	I1217 11:15:06.217711 1674764 start.go:927] validating driver "docker" against <nil>
	I1217 11:15:06.217729 1674764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:15:06.218364 1674764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:15:06.275997 1674764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 11:15:06.265923757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:15:06.276152 1674764 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:15:06.276385 1674764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:15:06.278454 1674764 out.go:179] * Using Docker driver with root privileges
	I1217 11:15:06.279917 1674764 cni.go:84] Creating CNI manager for ""
	I1217 11:15:06.279982 1674764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:15:06.279996 1674764 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:15:06.280067 1674764 start.go:353] cluster config:
	{Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:15:06.281474 1674764 out.go:179] * Starting "addons-767877" primary control-plane node in "addons-767877" cluster
	I1217 11:15:06.282736 1674764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:15:06.284083 1674764 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:15:06.285412 1674764 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:06.285450 1674764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:15:06.285462 1674764 cache.go:65] Caching tarball of preloaded images
	I1217 11:15:06.285500 1674764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:15:06.285589 1674764 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:15:06.285603 1674764 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:15:06.285956 1674764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/config.json ...
	I1217 11:15:06.285986 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/config.json: {Name:mkeb4331b0b9b75b09c1c790cf4a0f31e90d34b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:06.303578 1674764 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 11:15:06.303710 1674764 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 11:15:06.303728 1674764 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 11:15:06.303733 1674764 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 11:15:06.303741 1674764 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 11:15:06.303748 1674764 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 11:15:19.518860 1674764 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 11:15:19.518939 1674764 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:15:19.519017 1674764 start.go:360] acquireMachinesLock for addons-767877: {Name:mka931babc38735da6b7f52b3f5f8ca18e84efc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:15:19.519147 1674764 start.go:364] duration metric: took 104.066µs to acquireMachinesLock for "addons-767877"
	I1217 11:15:19.519183 1674764 start.go:93] Provisioning new machine with config: &{Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:15:19.519282 1674764 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:15:19.521207 1674764 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 11:15:19.521489 1674764 start.go:159] libmachine.API.Create for "addons-767877" (driver="docker")
	I1217 11:15:19.521548 1674764 client.go:173] LocalClient.Create starting
	I1217 11:15:19.521682 1674764 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:15:19.627142 1674764 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:15:19.692960 1674764 cli_runner.go:164] Run: docker network inspect addons-767877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:15:19.710577 1674764 cli_runner.go:211] docker network inspect addons-767877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:15:19.710648 1674764 network_create.go:284] running [docker network inspect addons-767877] to gather additional debugging logs...
	I1217 11:15:19.710673 1674764 cli_runner.go:164] Run: docker network inspect addons-767877
	W1217 11:15:19.727981 1674764 cli_runner.go:211] docker network inspect addons-767877 returned with exit code 1
	I1217 11:15:19.728012 1674764 network_create.go:287] error running [docker network inspect addons-767877]: docker network inspect addons-767877: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-767877 not found
	I1217 11:15:19.728041 1674764 network_create.go:289] output of [docker network inspect addons-767877]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-767877 not found
	
	** /stderr **
	I1217 11:15:19.728162 1674764 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:15:19.746291 1674764 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f89a00}
	I1217 11:15:19.746345 1674764 network_create.go:124] attempt to create docker network addons-767877 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 11:15:19.746393 1674764 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-767877 addons-767877
	I1217 11:15:19.793769 1674764 network_create.go:108] docker network addons-767877 192.168.49.0/24 created
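For reference, the network-creation step recorded just above (pick a free private subnet, then `docker network create` with minikube's labels) can be reproduced on its own. The following Go sketch is not minikube's implementation; it simply shells out to the same commands the log shows, reusing the profile name and subnet from this run as hypothetical values.

    // netcreate.go - recreate the bridge network exactly as the log shows,
    // then read the subnet back with `docker network inspect`.
    // Sketch only; assumes the docker CLI is on PATH.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	name := "addons-767877" // profile name taken from this log
    	create := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet=192.168.49.0/24",
    		"--gateway=192.168.49.1",
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io="+name,
    		name)
    	if out, err := create.CombinedOutput(); err != nil {
    		log.Fatalf("network create failed: %v\n%s", err, out)
    	}

    	// Confirm the network exists and report its subnet.
    	inspect := exec.Command("docker", "network", "inspect", name,
    		"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}")
    	out, err := inspect.Output()
    	if err != nil {
    		log.Fatalf("network inspect failed: %v", err)
    	}
    	fmt.Printf("network %s created with subnet %s", name, out)
    }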
	I1217 11:15:19.793801 1674764 kic.go:121] calculated static IP "192.168.49.2" for the "addons-767877" container
	I1217 11:15:19.793861 1674764 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:15:19.811159 1674764 cli_runner.go:164] Run: docker volume create addons-767877 --label name.minikube.sigs.k8s.io=addons-767877 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:15:19.831370 1674764 oci.go:103] Successfully created a docker volume addons-767877
	I1217 11:15:19.831483 1674764 cli_runner.go:164] Run: docker run --rm --name addons-767877-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-767877 --entrypoint /usr/bin/test -v addons-767877:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:15:22.393273 1674764 cli_runner.go:217] Completed: docker run --rm --name addons-767877-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-767877 --entrypoint /usr/bin/test -v addons-767877:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (2.561726319s)
	I1217 11:15:22.393308 1674764 oci.go:107] Successfully prepared a docker volume addons-767877
	I1217 11:15:22.393364 1674764 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:22.393375 1674764 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:15:22.393433 1674764 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-767877:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:15:26.258632 1674764 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-767877:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.865141769s)
	I1217 11:15:26.258679 1674764 kic.go:203] duration metric: took 3.865292775s to extract preloaded images to volume ...
	W1217 11:15:26.258783 1674764 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:15:26.258819 1674764 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:15:26.258860 1674764 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:15:26.317894 1674764 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-767877 --name addons-767877 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-767877 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-767877 --network addons-767877 --ip 192.168.49.2 --volume addons-767877:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:15:26.603355 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Running}}
	I1217 11:15:26.623422 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:26.644008 1674764 cli_runner.go:164] Run: docker exec addons-767877 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:15:26.693434 1674764 oci.go:144] the created container "addons-767877" has a running status.
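The `docker run` above publishes the node's SSH, API server and registry ports on ephemeral 127.0.0.1 ports (`--publish=127.0.0.1::22` and friends). The ephemeral host port that ends up carrying SSH (34301 in this run) can be recovered with the same inspect template that appears later in this log. A minimal, hypothetical Go sketch:

    // hostport.go - recover the host port Docker assigned to the container's
    // 22/tcp, using the same Go-template the log uses for its SSH client.
    // Sketch only; assumes the container from this run is still up locally.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	container := "addons-767877"
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		log.Fatalf("inspect failed: %v", err)
    	}
    	port := strings.TrimSpace(string(out))
    	fmt.Printf("ssh -i <profile id_rsa> -p %s docker@127.0.0.1\n", port)
    }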
	I1217 11:15:26.693466 1674764 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa...
	I1217 11:15:26.749665 1674764 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:15:26.789006 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:26.808629 1674764 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:15:26.808657 1674764 kic_runner.go:114] Args: [docker exec --privileged addons-767877 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:15:26.878015 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:26.899057 1674764 machine.go:94] provisionDockerMachine start ...
	I1217 11:15:26.899168 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:26.925732 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:26.926045 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:26.926064 1674764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:15:26.927122 1674764 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43040->127.0.0.1:34301: read: connection reset by peer
	I1217 11:15:30.060482 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-767877
	
	I1217 11:15:30.060517 1674764 ubuntu.go:182] provisioning hostname "addons-767877"
	I1217 11:15:30.060617 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.081158 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:30.081435 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:30.081454 1674764 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-767877 && echo "addons-767877" | sudo tee /etc/hostname
	I1217 11:15:30.223752 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-767877
	
	I1217 11:15:30.223842 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.243897 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:30.244133 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:30.244150 1674764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-767877' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-767877/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-767877' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:15:30.376671 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:15:30.376717 1674764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:15:30.376752 1674764 ubuntu.go:190] setting up certificates
	I1217 11:15:30.376771 1674764 provision.go:84] configureAuth start
	I1217 11:15:30.376847 1674764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-767877
	I1217 11:15:30.398209 1674764 provision.go:143] copyHostCerts
	I1217 11:15:30.398323 1674764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:15:30.398519 1674764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:15:30.398658 1674764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:15:30.398749 1674764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.addons-767877 san=[127.0.0.1 192.168.49.2 addons-767877 localhost minikube]
	I1217 11:15:30.499086 1674764 provision.go:177] copyRemoteCerts
	I1217 11:15:30.499166 1674764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:15:30.499218 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.519512 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:30.617133 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:15:30.638750 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 11:15:30.658674 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 11:15:30.678344 1674764 provision.go:87] duration metric: took 301.553303ms to configureAuth
	I1217 11:15:30.678382 1674764 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:15:30.678613 1674764 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:30.678733 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.697884 1674764 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:30.698122 1674764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34301 <nil> <nil>}
	I1217 11:15:30.698138 1674764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:15:30.978507 1674764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:15:30.978555 1674764 machine.go:97] duration metric: took 4.07945213s to provisionDockerMachine
	I1217 11:15:30.978586 1674764 client.go:176] duration metric: took 11.457026019s to LocalClient.Create
	I1217 11:15:30.978604 1674764 start.go:167] duration metric: took 11.457118325s to libmachine.API.Create "addons-767877"
	I1217 11:15:30.978611 1674764 start.go:293] postStartSetup for "addons-767877" (driver="docker")
	I1217 11:15:30.978621 1674764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:15:30.978683 1674764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:15:30.978721 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:30.998969 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.096174 1674764 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:15:31.100106 1674764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:15:31.100143 1674764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:15:31.100160 1674764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:15:31.100232 1674764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:15:31.100265 1674764 start.go:296] duration metric: took 121.646716ms for postStartSetup
	I1217 11:15:31.100620 1674764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-767877
	I1217 11:15:31.118981 1674764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/config.json ...
	I1217 11:15:31.119253 1674764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:15:31.119311 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:31.137454 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.227831 1674764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:15:31.232441 1674764 start.go:128] duration metric: took 11.713131095s to createHost
	I1217 11:15:31.232469 1674764 start.go:83] releasing machines lock for "addons-767877", held for 11.713307547s
	I1217 11:15:31.232559 1674764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-767877
	I1217 11:15:31.250357 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:15:31.250434 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:15:31.250468 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:15:31.250506 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	W1217 11:15:31.250616 1674764 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt: no such file or directory
	I1217 11:15:31.250715 1674764 ssh_runner.go:195] Run: cat /version.json
	I1217 11:15:31.250770 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:31.250830 1674764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:15:31.250937 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:31.268778 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.270787 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:31.358856 1674764 ssh_runner.go:195] Run: systemctl --version
	I1217 11:15:31.411652 1674764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:15:31.448442 1674764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:15:31.453231 1674764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:15:31.453291 1674764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:15:31.481284 1674764 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:15:31.481308 1674764 start.go:496] detecting cgroup driver to use...
	I1217 11:15:31.481347 1674764 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:15:31.481392 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:15:31.498983 1674764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:15:31.512560 1674764 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:15:31.512627 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:15:31.530285 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:15:31.548560 1674764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:15:31.633820 1674764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:15:31.724060 1674764 docker.go:234] disabling docker service ...
	I1217 11:15:31.724132 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:15:31.743940 1674764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:15:31.757394 1674764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:15:31.844943 1674764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:15:31.929733 1674764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:15:31.943023 1674764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:15:31.958060 1674764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:15:31.958117 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.969117 1674764 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:15:31.969190 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.978672 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.987830 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:31.997592 1674764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:15:32.006546 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:32.016010 1674764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:32.030611 1674764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:32.041363 1674764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:15:32.049304 1674764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:15:32.057259 1674764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:32.136706 1674764 ssh_runner.go:195] Run: sudo systemctl restart crio
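The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting CRI-O: it pins the pause image and forces the systemd cgroup manager (plus the conmon_cgroup and default_sysctls tweaks). A hedged Go sketch of the two main substitutions, equivalent in effect to the sed commands recorded above (not minikube's own code):

    // criocfg.go - apply the pause_image and cgroup_manager edits that the log
    // performs with sed on 02-crio.conf. Path and image tag are taken from the log.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	conf := string(data)

    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// A `systemctl daemon-reload` and `systemctl restart crio` are still
    	// needed afterwards, exactly as the log shows.
    }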
	I1217 11:15:32.270812 1674764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:15:32.270897 1674764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:15:32.275242 1674764 start.go:564] Will wait 60s for crictl version
	I1217 11:15:32.275314 1674764 ssh_runner.go:195] Run: which crictl
	I1217 11:15:32.279124 1674764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:15:32.304271 1674764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:15:32.304371 1674764 ssh_runner.go:195] Run: crio --version
	I1217 11:15:32.334376 1674764 ssh_runner.go:195] Run: crio --version
	I1217 11:15:32.365733 1674764 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 11:15:32.367152 1674764 cli_runner.go:164] Run: docker network inspect addons-767877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:15:32.386016 1674764 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 11:15:32.390444 1674764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
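The one-liner above is an idempotent /etc/hosts update: drop any existing host.minikube.internal mapping, then append the gateway mapping. A small Go sketch with the same effect (illustrative only; like the logged command it must run as root):

    // hostsentry.go - mirror the log's idempotent /etc/hosts rewrite for
    // host.minikube.internal. Sketch only; gateway IP taken from this run.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.49.1\thost.minikube.internal"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of `grep -v $'\thost.minikube.internal$'`: keep every line
    	// that is not an existing mapping for host.minikube.internal.
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }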
	I1217 11:15:32.401566 1674764 kubeadm.go:884] updating cluster {Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:15:32.401741 1674764 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:32.401829 1674764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:15:32.436508 1674764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:15:32.436541 1674764 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:15:32.436602 1674764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:15:32.463779 1674764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:15:32.463802 1674764 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:15:32.463811 1674764 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 11:15:32.463916 1674764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-767877 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:15:32.463995 1674764 ssh_runner.go:195] Run: crio config
	I1217 11:15:32.511266 1674764 cni.go:84] Creating CNI manager for ""
	I1217 11:15:32.511292 1674764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:15:32.511314 1674764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:15:32.511342 1674764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-767877 NodeName:addons-767877 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:15:32.511497 1674764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-767877"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:15:32.511594 1674764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:15:32.520352 1674764 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:15:32.520416 1674764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:15:32.529143 1674764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 11:15:32.543392 1674764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:15:32.560158 1674764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
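At this point the generated kubeadm config (the multi-document YAML printed above) has been copied to /var/tmp/minikube/kubeadm.yaml.new; the log later moves it to /var/tmp/minikube/kubeadm.yaml before `kubeadm init`. A quick, hypothetical way to sanity-check such a file is to list the apiVersion/kind of each YAML document; the sketch below uses gopkg.in/yaml.v3 and is not part of minikube.

    // kubeadmkinds.go - list the apiVersion/kind of every document in the
    // generated kubeadm config before handing it to kubeadm init.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    type header struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var h header
    		if err := dec.Decode(&h); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
    	}
    	// For this run the output should list InitConfiguration and
    	// ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration
    	// (kubelet.config.k8s.io/v1beta1) and KubeProxyConfiguration
    	// (kubeproxy.config.k8s.io/v1alpha1), matching the config printed above.
    }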
	I1217 11:15:32.574385 1674764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:15:32.578555 1674764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:15:32.589272 1674764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:32.674414 1674764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:15:32.697384 1674764 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877 for IP: 192.168.49.2
	I1217 11:15:32.697408 1674764 certs.go:195] generating shared ca certs ...
	I1217 11:15:32.697430 1674764 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:32.697602 1674764 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:15:32.887689 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt ...
	I1217 11:15:32.887725 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt: {Name:mk4882739fd469c3954287ada1b0e38cfbfbf4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:32.887928 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key ...
	I1217 11:15:32.887942 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key: {Name:mk1b9b63fe2ddf00e259a101090a4a6e1bd0e44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:32.888018 1674764 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:15:33.017709 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt ...
	I1217 11:15:33.017749 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt: {Name:mkfefe589cf8069373711c0cc560b187f2d4aab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.017929 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key ...
	I1217 11:15:33.017942 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key: {Name:mk75206f47b10a40f6c55cbdc3cbc8af52e382ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.018023 1674764 certs.go:257] generating profile certs ...
	I1217 11:15:33.018082 1674764 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.key
	I1217 11:15:33.018097 1674764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt with IP's: []
	I1217 11:15:33.194733 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt ...
	I1217 11:15:33.194777 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: {Name:mk1c2c064493309c6e1adec623609d0753c01230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.194967 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.key ...
	I1217 11:15:33.194979 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.key: {Name:mkc6610e0c5152a4cc6f2bfe0238bd6a86fc868a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.195051 1674764 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a
	I1217 11:15:33.195070 1674764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 11:15:33.234173 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a ...
	I1217 11:15:33.234202 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a: {Name:mk87f54c754bcb7639ff086b5eb3c02f2f042164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.234400 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a ...
	I1217 11:15:33.234415 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a: {Name:mk630f3083d13c62827302b94c60c0a3076d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.234496 1674764 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt.5efcf61a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt
	I1217 11:15:33.234612 1674764 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key.5efcf61a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key
	I1217 11:15:33.234681 1674764 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key
	I1217 11:15:33.234704 1674764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt with IP's: []
	I1217 11:15:33.266045 1674764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt ...
	I1217 11:15:33.266077 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt: {Name:mk655ebc418e7414edef9c1e9923b774e576ba24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.266242 1674764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key ...
	I1217 11:15:33.266258 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key: {Name:mka920f2e6ef710d61dafb5798cdc1b38d5a7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:33.266430 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:15:33.266511 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:15:33.266557 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:15:33.266599 1674764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:15:33.267217 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:15:33.286836 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:15:33.305257 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:15:33.324225 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:15:33.343170 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 11:15:33.361338 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:15:33.380164 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:15:33.398933 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:15:33.417425 1674764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:15:33.438333 1674764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:15:33.451773 1674764 ssh_runner.go:195] Run: openssl version
	I1217 11:15:33.458333 1674764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.466626 1674764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:15:33.477631 1674764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.482006 1674764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.482065 1674764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:33.516993 1674764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:15:33.525550 1674764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
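The two `ln -fs` steps above wire the minikube CA into the system trust store: the cert is linked into /etc/ssl/certs and also linked under its OpenSSL subject-hash name (b5213941.0 in this run), which is how OpenSSL-based clients locate it. A hedged Go sketch that mirrors the hash-and-symlink part by shelling out to the same `openssl x509 -hash` invocation:

    // catrust.go - compute the OpenSSL subject hash of the CA and create the
    // /etc/ssl/certs/<hash>.0 symlink, mirroring the commands recorded above.
    // Sketch only; requires root and openssl on PATH.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem"

    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatalf("openssl failed: %v", err)
    	}
    	hash := strings.TrimSpace(string(out)) // "b5213941" in this run

    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link)
    }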
	I1217 11:15:33.533954 1674764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:15:33.538147 1674764 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:15:33.538240 1674764 kubeadm.go:401] StartCluster: {Name:addons-767877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-767877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:15:33.538341 1674764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:15:33.538418 1674764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:15:33.572833 1674764 cri.go:89] found id: ""
	I1217 11:15:33.572933 1674764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:15:33.581515 1674764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:15:33.590189 1674764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:15:33.590252 1674764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:15:33.598421 1674764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:15:33.598457 1674764 kubeadm.go:158] found existing configuration files:
	
	I1217 11:15:33.598512 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:15:33.606627 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:15:33.606683 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:15:33.614520 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:15:33.622568 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:15:33.622640 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:15:33.630334 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:15:33.638281 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:15:33.638333 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:15:33.646090 1674764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:15:33.654322 1674764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:15:33.654379 1674764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:15:33.662432 1674764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
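The init command above pins PATH to the version-matched binaries directory and passes an explicit --ignore-preflight-errors list, since several host checks (swap, memory, SystemVerification, bridge-nf-call-iptables) do not apply inside the kic container. A small, hypothetical Go wrapper that runs the same command and streams its output (the ignore list is abbreviated here; the full set is in the log line above):

    // runinit.go - run the logged `kubeadm init` with the version-pinned PATH,
    // streaming stdout/stderr. Illustrative wrapper only, not minikube's code.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "/bin/bash", "-c",
    		`env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init `+
    			`--config /var/tmp/minikube/kubeadm.yaml `+
    			`--ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem,SystemVerification`)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }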
	I1217 11:15:33.701495 1674764 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:15:33.701567 1674764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:15:33.723934 1674764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:15:33.724026 1674764 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:15:33.724068 1674764 kubeadm.go:319] OS: Linux
	I1217 11:15:33.724123 1674764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:15:33.724168 1674764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:15:33.724214 1674764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:15:33.724256 1674764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:15:33.724296 1674764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:15:33.724384 1674764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:15:33.724482 1674764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:15:33.724555 1674764 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:15:33.784667 1674764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:15:33.784806 1674764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:15:33.785005 1674764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:15:33.793118 1674764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:15:33.795464 1674764 out.go:252]   - Generating certificates and keys ...
	I1217 11:15:33.795604 1674764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:15:33.795707 1674764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:15:33.999124 1674764 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:15:34.137796 1674764 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:15:34.412578 1674764 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:15:34.632112 1674764 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:15:34.806825 1674764 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:15:34.806969 1674764 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-767877 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 11:15:34.942934 1674764 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:15:34.943121 1674764 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-767877 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 11:15:35.200820 1674764 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:15:35.594983 1674764 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:15:36.110517 1674764 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:15:36.110654 1674764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:15:36.182240 1674764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:15:36.487965 1674764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:15:36.986595 1674764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:15:37.353997 1674764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:15:38.249958 1674764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:15:38.250267 1674764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:15:38.255198 1674764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:15:38.256768 1674764 out.go:252]   - Booting up control plane ...
	I1217 11:15:38.256903 1674764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:15:38.257031 1674764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:15:38.257719 1674764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:15:38.272014 1674764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:15:38.272169 1674764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:15:38.278632 1674764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:15:38.278751 1674764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:15:38.278830 1674764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:15:38.379862 1674764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:15:38.380037 1674764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:15:39.380847 1674764 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001145644s
	I1217 11:15:39.383816 1674764 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:15:39.383911 1674764 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 11:15:39.383995 1674764 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:15:39.384126 1674764 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:15:40.701470 1674764 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.317491754s
	I1217 11:15:41.948655 1674764 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.564798441s
	I1217 11:15:43.885735 1674764 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501781912s
	I1217 11:15:43.902445 1674764 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:15:43.915295 1674764 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:15:43.925306 1674764 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:15:43.925677 1674764 kubeadm.go:319] [mark-control-plane] Marking the node addons-767877 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:15:43.934699 1674764 kubeadm.go:319] [bootstrap-token] Using token: piq0we.dhk2ndq2cma16lft
	I1217 11:15:43.936706 1674764 out.go:252]   - Configuring RBAC rules ...
	I1217 11:15:43.936873 1674764 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:15:43.943316 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:15:43.949204 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:15:43.952023 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:15:43.954918 1674764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:15:43.959243 1674764 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:15:44.292559 1674764 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:15:44.708976 1674764 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:15:45.292197 1674764 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:15:45.293348 1674764 kubeadm.go:319] 
	I1217 11:15:45.293450 1674764 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:15:45.293461 1674764 kubeadm.go:319] 
	I1217 11:15:45.293600 1674764 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:15:45.293611 1674764 kubeadm.go:319] 
	I1217 11:15:45.293648 1674764 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:15:45.293735 1674764 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:15:45.293785 1674764 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:15:45.293817 1674764 kubeadm.go:319] 
	I1217 11:15:45.293879 1674764 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:15:45.293890 1674764 kubeadm.go:319] 
	I1217 11:15:45.293963 1674764 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:15:45.293976 1674764 kubeadm.go:319] 
	I1217 11:15:45.294053 1674764 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:15:45.294127 1674764 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:15:45.294209 1674764 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:15:45.294226 1674764 kubeadm.go:319] 
	I1217 11:15:45.294296 1674764 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:15:45.294383 1674764 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:15:45.294391 1674764 kubeadm.go:319] 
	I1217 11:15:45.294471 1674764 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token piq0we.dhk2ndq2cma16lft \
	I1217 11:15:45.294639 1674764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:15:45.294669 1674764 kubeadm.go:319] 	--control-plane 
	I1217 11:15:45.294673 1674764 kubeadm.go:319] 
	I1217 11:15:45.294745 1674764 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:15:45.294757 1674764 kubeadm.go:319] 
	I1217 11:15:45.294871 1674764 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token piq0we.dhk2ndq2cma16lft \
	I1217 11:15:45.294961 1674764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:15:45.297200 1674764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:15:45.297325 1674764 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
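	The join commands printed above embed the bootstrap token and the SHA-256 hash of the cluster CA public key. Should that hash ever need to be recomputed on this control-plane node (for a later join, for instance), the usual kubeadm recipe applies; note the certificateDir here is /var/lib/minikube/certs (see the [certs] phase above), not the stock /etc/kubernetes/pki:

	  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'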
	I1217 11:15:45.297364 1674764 cni.go:84] Creating CNI manager for ""
	I1217 11:15:45.297379 1674764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:15:45.299607 1674764 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:15:45.301167 1674764 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:15:45.305858 1674764 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:15:45.305883 1674764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:15:45.320247 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
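	After the kindnet manifest is applied, a quick sanity check would be to watch its rollout. The DaemonSet name and label below are assumptions about the kindnet manifest, not something this log confirms:

	  # assumed: minikube's kindnet manifest creates a DaemonSet named "kindnet" labeled app=kindnet
	  $ kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	  $ kubectl -n kube-system get pods -l app=kindnet -o wide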
	I1217 11:15:45.537713 1674764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:15:45.537786 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-767877 minikube.k8s.io/updated_at=2025_12_17T11_15_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=addons-767877 minikube.k8s.io/primary=true
	I1217 11:15:45.537786 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:45.549843 1674764 ops.go:34] apiserver oom_adj: -16
	I1217 11:15:45.629360 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:46.129837 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:46.630166 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:47.130167 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:47.629516 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:48.129857 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:48.629668 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:49.129775 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:49.630480 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:50.130070 1674764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:50.206854 1674764 kubeadm.go:1114] duration metric: took 4.669130286s to wait for elevateKubeSystemPrivileges
	I1217 11:15:50.206891 1674764 kubeadm.go:403] duration metric: took 16.668683367s to StartCluster
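	The repeated `kubectl get sa default` calls above are a readiness poll: minikube keeps asking until the default ServiceAccount exists before it proceeds with the elevated kube-system RBAC step. A rough shell equivalent of that wait, using the binary and kubeconfig paths shown in the log, would be:

	  # poll until the "default" ServiceAccount is created by the controller-manager
	  $ until sudo /var/lib/minikube/binaries/v1.34.3/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done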
	I1217 11:15:50.206912 1674764 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:50.207031 1674764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:15:50.207568 1674764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:50.207808 1674764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:15:50.207828 1674764 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:15:50.207910 1674764 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 11:15:50.208044 1674764 addons.go:70] Setting yakd=true in profile "addons-767877"
	I1217 11:15:50.208052 1674764 addons.go:70] Setting default-storageclass=true in profile "addons-767877"
	I1217 11:15:50.208063 1674764 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:50.208074 1674764 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-767877"
	I1217 11:15:50.208073 1674764 addons.go:70] Setting cloud-spanner=true in profile "addons-767877"
	I1217 11:15:50.208095 1674764 addons.go:70] Setting storage-provisioner=true in profile "addons-767877"
	I1217 11:15:50.208107 1674764 addons.go:239] Setting addon storage-provisioner=true in "addons-767877"
	I1217 11:15:50.208119 1674764 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-767877"
	I1217 11:15:50.208122 1674764 addons.go:70] Setting ingress-dns=true in profile "addons-767877"
	I1217 11:15:50.208130 1674764 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-767877"
	I1217 11:15:50.208135 1674764 addons.go:239] Setting addon ingress-dns=true in "addons-767877"
	I1217 11:15:50.208152 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208171 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208173 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208152 1674764 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-767877"
	I1217 11:15:50.208178 1674764 addons.go:70] Setting gcp-auth=true in profile "addons-767877"
	I1217 11:15:50.208211 1674764 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-767877"
	I1217 11:15:50.208223 1674764 mustload.go:66] Loading cluster: addons-767877
	I1217 11:15:50.208262 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208451 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208481 1674764 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:50.208612 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208664 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208690 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208762 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208865 1674764 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-767877"
	I1217 11:15:50.208924 1674764 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-767877"
	I1217 11:15:50.208954 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.209009 1674764 addons.go:70] Setting volcano=true in profile "addons-767877"
	I1217 11:15:50.209028 1674764 addons.go:239] Setting addon volcano=true in "addons-767877"
	I1217 11:15:50.209066 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208076 1674764 addons.go:70] Setting registry=true in profile "addons-767877"
	I1217 11:15:50.209203 1674764 addons.go:239] Setting addon registry=true in "addons-767877"
	I1217 11:15:50.209237 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.209397 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.209561 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.209712 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208110 1674764 addons.go:239] Setting addon cloud-spanner=true in "addons-767877"
	I1217 11:15:50.209934 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.210401 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.210933 1674764 addons.go:70] Setting inspektor-gadget=true in profile "addons-767877"
	I1217 11:15:50.210960 1674764 addons.go:239] Setting addon inspektor-gadget=true in "addons-767877"
	I1217 11:15:50.210990 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.211489 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.211623 1674764 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-767877"
	I1217 11:15:50.211645 1674764 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-767877"
	I1217 11:15:50.211972 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.208068 1674764 addons.go:239] Setting addon yakd=true in "addons-767877"
	I1217 11:15:50.212579 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208086 1674764 addons.go:70] Setting registry-creds=true in profile "addons-767877"
	I1217 11:15:50.215653 1674764 addons.go:239] Setting addon registry-creds=true in "addons-767877"
	I1217 11:15:50.215713 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.208766 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.213706 1674764 addons.go:70] Setting ingress=true in profile "addons-767877"
	I1217 11:15:50.215910 1674764 addons.go:239] Setting addon ingress=true in "addons-767877"
	I1217 11:15:50.215971 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.213754 1674764 out.go:179] * Verifying Kubernetes components...
	I1217 11:15:50.216129 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.213963 1674764 addons.go:70] Setting volumesnapshots=true in profile "addons-767877"
	I1217 11:15:50.216225 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.216235 1674764 addons.go:239] Setting addon volumesnapshots=true in "addons-767877"
	I1217 11:15:50.216281 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.213981 1674764 addons.go:70] Setting metrics-server=true in profile "addons-767877"
	I1217 11:15:50.216707 1674764 addons.go:239] Setting addon metrics-server=true in "addons-767877"
	I1217 11:15:50.216800 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.217911 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.221962 1674764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:50.224004 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.224179 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.259849 1674764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:15:50.261358 1674764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:15:50.261381 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:15:50.261469 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.266074 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 11:15:50.267427 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 11:15:50.268872 1674764 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 11:15:50.268942 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 11:15:50.273135 1674764 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 11:15:50.273156 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 11:15:50.273227 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.274849 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 11:15:50.274927 1674764 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 11:15:50.276463 1674764 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 11:15:50.276488 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 11:15:50.276585 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.276787 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 11:15:50.279162 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 11:15:50.280585 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 11:15:50.282808 1674764 addons.go:239] Setting addon default-storageclass=true in "addons-767877"
	I1217 11:15:50.285149 1674764 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 11:15:50.286504 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.290022 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.290394 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 11:15:50.290488 1674764 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 11:15:50.290503 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 11:15:50.290572 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.290421 1674764 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-767877"
	I1217 11:15:50.290715 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.290775 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:50.291242 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:50.297435 1674764 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 11:15:50.298875 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 11:15:50.298912 1674764 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 11:15:50.298974 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.299146 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 11:15:50.299158 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 11:15:50.299215 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.299820 1674764 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 11:15:50.305616 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 11:15:50.305644 1674764 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 11:15:50.305740 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.305986 1674764 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 11:15:50.310932 1674764 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 11:15:50.322695 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 11:15:50.320009 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.322913 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	W1217 11:15:50.322652 1674764 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 11:15:50.331076 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:15:50.331155 1674764 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 11:15:50.331078 1674764 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 11:15:50.331078 1674764 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 11:15:50.332877 1674764 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 11:15:50.332916 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 11:15:50.332980 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.333153 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 11:15:50.333176 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 11:15:50.333238 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.335508 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 11:15:50.335626 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 11:15:50.335639 1674764 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 11:15:50.335723 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.347990 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:15:50.349650 1674764 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 11:15:50.349679 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 11:15:50.349765 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.358667 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.362117 1674764 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 11:15:50.368246 1674764 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 11:15:50.372111 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 11:15:50.372972 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 11:15:50.373213 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.380228 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.380727 1674764 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:15:50.380745 1674764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:15:50.380813 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.388109 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.388642 1674764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:15:50.396393 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.397611 1674764 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 11:15:50.397824 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.403005 1674764 out.go:179]   - Using image docker.io/busybox:stable
	I1217 11:15:50.404351 1674764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 11:15:50.404369 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 11:15:50.404449 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:50.404834 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.411981 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.416389 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.424241 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.428764 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.435293 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.441190 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.452635 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:50.452842 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	W1217 11:15:50.455375 1674764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 11:15:50.455421 1674764 retry.go:31] will retry after 343.863702ms: ssh: handshake failed: EOF
	I1217 11:15:50.456700 1674764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:15:50.552037 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 11:15:50.575355 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 11:15:50.576136 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 11:15:50.576164 1674764 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 11:15:50.587993 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:15:50.592977 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 11:15:50.593001 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 11:15:50.606157 1674764 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 11:15:50.606183 1674764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 11:15:50.610294 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 11:15:50.615186 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 11:15:50.615217 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 11:15:50.620059 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:15:50.623101 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 11:15:50.626626 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 11:15:50.629041 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 11:15:50.629067 1674764 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 11:15:50.637696 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 11:15:50.647286 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 11:15:50.647312 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 11:15:50.648294 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 11:15:50.653119 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 11:15:50.653265 1674764 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 11:15:50.659832 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 11:15:50.659858 1674764 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 11:15:50.660150 1674764 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 11:15:50.660167 1674764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 11:15:50.682049 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 11:15:50.682154 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 11:15:50.685794 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 11:15:50.685815 1674764 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 11:15:50.707254 1674764 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 11:15:50.707341 1674764 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 11:15:50.717493 1674764 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 11:15:50.717614 1674764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 11:15:50.718641 1674764 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 11:15:50.718716 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 11:15:50.751315 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 11:15:50.751345 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 11:15:50.757326 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 11:15:50.769285 1674764 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 11:15:50.769305 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 11:15:50.774231 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 11:15:50.774340 1674764 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 11:15:50.788331 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 11:15:50.808260 1674764 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:15:50.808289 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 11:15:50.811313 1674764 node_ready.go:35] waiting up to 6m0s for node "addons-767877" to be "Ready" ...
	I1217 11:15:50.811803 1674764 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
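	The host-record injection above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway address (192.168.49.1 here). Once the replace has gone through, it could be verified with something like:

	  # show the injected hosts{} stanza in the Corefile
	  $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	  # resolve the name from inside the cluster
	  $ kubectl run dns-probe --rm -it --restart=Never --image=busybox:stable -- nslookup host.minikube.internal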
	I1217 11:15:50.813435 1674764 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 11:15:50.813454 1674764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 11:15:50.845704 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 11:15:50.888957 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 11:15:50.889049 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 11:15:50.890981 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:15:50.975672 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 11:15:50.975706 1674764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 11:15:51.041756 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 11:15:51.041787 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 11:15:51.047141 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 11:15:51.098080 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 11:15:51.098105 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 11:15:51.148594 1674764 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 11:15:51.148631 1674764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 11:15:51.211938 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 11:15:51.317464 1674764 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-767877" context rescaled to 1 replicas
	I1217 11:15:51.641019 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.003280766s)
	I1217 11:15:51.641379 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.014722735s)
	I1217 11:15:51.958876 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.310544014s)
	I1217 11:15:51.958931 1674764 addons.go:495] Verifying addon ingress=true in "addons-767877"
	I1217 11:15:51.959282 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201845876s)
	I1217 11:15:51.959329 1674764 addons.go:495] Verifying addon metrics-server=true in "addons-767877"
	I1217 11:15:51.959385 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.170929584s)
	I1217 11:15:51.959407 1674764 addons.go:495] Verifying addon registry=true in "addons-767877"
	I1217 11:15:51.959546 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.113797063s)
	I1217 11:15:51.960239 1674764 out.go:179] * Verifying ingress addon...
	I1217 11:15:51.961286 1674764 out.go:179] * Verifying registry addon...
	I1217 11:15:51.963104 1674764 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 11:15:51.963351 1674764 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-767877 service yakd-dashboard -n yakd-dashboard
	
	I1217 11:15:51.964938 1674764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 11:15:51.975794 1674764 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 11:15:51.975822 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:51.977766 1674764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 11:15:51.977791 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
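	These kapi.go waits poll pods by label selector until they leave Pending. The equivalent one-shot checks outside the test harness, using the same selectors the log reports, would be:

	  $ kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=5m
	  $ kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=5m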
	I1217 11:15:52.468625 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:52.469200 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:52.475371 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.584343307s)
	W1217 11:15:52.475440 1674764 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 11:15:52.475467 1674764 retry.go:31] will retry after 288.762059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
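	The failure being retried here is an ordering problem: the VolumeSnapshotClass object is submitted in the same apply batch as the CRDs that define its kind, so the API server has no mapping for it yet. The retry succeeds because the CRDs are registered by then; a more explicit fix outside this harness would be to apply the CRDs first and wait for them to become established, roughly:

	  $ kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	        -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	        -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	  $ kubectl wait --for=condition=Established crd --all --timeout=60s
	  $ kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml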
	I1217 11:15:52.475578 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.428408093s)
	I1217 11:15:52.476015 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.264032382s)
	I1217 11:15:52.476046 1674764 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-767877"
	I1217 11:15:52.477967 1674764 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 11:15:52.483953 1674764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 11:15:52.497956 1674764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 11:15:52.497995 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:52.765300 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1217 11:15:52.815039 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
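	The node_ready poll above is watching the node's Ready condition, which stays False until the CNI is up. The same condition can be read directly with:

	  $ kubectl get node addons-767877 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'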
	I1217 11:15:52.967414 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:52.967606 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:52.987347 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:53.467542 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:53.467705 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:53.569984 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:53.967432 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:53.968365 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:53.987396 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:54.466958 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:54.468290 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:54.487521 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 11:15:54.815256 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:54.966668 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:54.967599 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:54.987313 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:55.271800 1674764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.506451965s)
	I1217 11:15:55.467390 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:55.467818 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:55.487304 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:55.967717 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:55.967771 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:55.987458 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:56.467137 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:56.468625 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:56.487687 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:56.967632 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:56.967737 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:56.987605 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 11:15:57.314674 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:57.467221 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:57.468371 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:57.487448 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:57.899285 1674764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 11:15:57.899360 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:57.918094 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:57.967231 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:57.968366 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:57.987750 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:58.027257 1674764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 11:15:58.040827 1674764 addons.go:239] Setting addon gcp-auth=true in "addons-767877"
	I1217 11:15:58.040881 1674764 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:15:58.041219 1674764 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:15:58.060014 1674764 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 11:15:58.060081 1674764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:15:58.079944 1674764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:15:58.171684 1674764 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:15:58.172875 1674764 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 11:15:58.174033 1674764 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 11:15:58.174051 1674764 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 11:15:58.188075 1674764 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 11:15:58.188101 1674764 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 11:15:58.201260 1674764 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 11:15:58.201285 1674764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 11:15:58.214403 1674764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 11:15:58.466815 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:58.467082 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:58.505337 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:58.531836 1674764 addons.go:495] Verifying addon gcp-auth=true in "addons-767877"
	I1217 11:15:58.533330 1674764 out.go:179] * Verifying gcp-auth addon...
	I1217 11:15:58.538008 1674764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 11:15:58.567236 1674764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 11:15:58.567259 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
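	The gcp-auth verification above polls pods matching the label selector kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace until they leave Pending. A rough client-go equivalent (an illustration, not minikube's kapi.go implementation; the kubeconfig path is assumed to be the default ~/.kube/config) might be:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("gcp-auth").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
		})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Println("gcp-auth pod is Running")
			return
		}
		// Roughly matches the ~500ms polling cadence visible in the log.
		time.Sleep(500 * time.Millisecond)
	}
}
```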
	I1217 11:15:58.966418 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:58.967776 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:58.987306 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:59.040989 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 11:15:59.315028 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:15:59.467649 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:59.467692 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:59.487705 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:15:59.568855 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:15:59.966276 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:15:59.967773 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:15:59.987438 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:00.041309 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:00.466750 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:00.468415 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:00.487793 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:00.541445 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:00.966895 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:00.968501 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:00.987891 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:01.042107 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:01.466731 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:01.468175 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:01.487430 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:01.542086 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 11:16:01.815147 1674764 node_ready.go:57] node "addons-767877" has "Ready":"False" status (will retry)
	I1217 11:16:01.967296 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:01.967897 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:01.987789 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:02.041684 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:02.467027 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:02.468665 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:02.487860 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:02.542222 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:02.966695 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:02.967664 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:02.987679 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:03.041407 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:03.467061 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:03.468658 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:03.487508 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:03.541506 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:03.966467 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:03.968724 1674764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 11:16:03.968747 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:03.987707 1674764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 11:16:03.987731 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:04.040819 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:04.315950 1674764 node_ready.go:49] node "addons-767877" is "Ready"
	I1217 11:16:04.315985 1674764 node_ready.go:38] duration metric: took 13.504627777s for node "addons-767877" to be "Ready" ...
	I1217 11:16:04.316071 1674764 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:16:04.316185 1674764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:16:04.335331 1674764 api_server.go:72] duration metric: took 14.127421304s to wait for apiserver process to appear ...
	I1217 11:16:04.335368 1674764 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:16:04.335393 1674764 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 11:16:04.342759 1674764 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 11:16:04.344001 1674764 api_server.go:141] control plane version: v1.34.3
	I1217 11:16:04.344054 1674764 api_server.go:131] duration metric: took 8.678237ms to wait for apiserver health ...
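	The healthz probe above hits https://192.168.49.2:8443/healthz and treats an HTTP 200 with body "ok" as a healthy apiserver. A bare-bones sketch of such a probe is shown below; certificate verification is skipped here purely for illustration, whereas the real check authenticates against the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not ready:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}
```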
	I1217 11:16:04.344066 1674764 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:16:04.348966 1674764 system_pods.go:59] 20 kube-system pods found
	I1217 11:16:04.349011 1674764 system_pods.go:61] "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:04.349025 1674764 system_pods.go:61] "coredns-66bc5c9577-bk7js" [93210791-8ce9-43e9-9da6-e86d9de52b6f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:04.349038 1674764 system_pods.go:61] "csi-hostpath-attacher-0" [eec1eed5-47ac-49ed-a8be-dee549fb94bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:04.349045 1674764 system_pods.go:61] "csi-hostpath-resizer-0" [5eaaa85c-9ad5-41d8-ac6c-8c4fa13a517c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:04.349054 1674764 system_pods.go:61] "csi-hostpathplugin-swlsr" [c3ad9360-2599-4b66-a906-94b66525daf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:04.349060 1674764 system_pods.go:61] "etcd-addons-767877" [e87b5478-e253-4eef-bdbe-d72caaad1864] Running
	I1217 11:16:04.349065 1674764 system_pods.go:61] "kindnet-nkfjh" [d5de55b0-a578-4d51-b058-d52c5a57ab72] Running
	I1217 11:16:04.349070 1674764 system_pods.go:61] "kube-apiserver-addons-767877" [1f0007be-20ae-4b96-a9ef-6f086ff6e9eb] Running
	I1217 11:16:04.349075 1674764 system_pods.go:61] "kube-controller-manager-addons-767877" [fb8ee463-4ef0-4a87-86cd-fd57584b3815] Running
	I1217 11:16:04.349083 1674764 system_pods.go:61] "kube-ingress-dns-minikube" [d0f34d27-69e5-47e3-b44b-96bbc77f4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:04.349088 1674764 system_pods.go:61] "kube-proxy-dmglt" [93e628dd-43c4-40f5-9d00-5eaeb986dcbd] Running
	I1217 11:16:04.349095 1674764 system_pods.go:61] "kube-scheduler-addons-767877" [0933f37d-9c50-40e4-9e8a-3adba17f3f11] Running
	I1217 11:16:04.349102 1674764 system_pods.go:61] "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:04.349111 1674764 system_pods.go:61] "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:04.349128 1674764 system_pods.go:61] "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:04.349146 1674764 system_pods.go:61] "registry-creds-764b6fb674-crd5v" [ae965757-a78e-4e4e-b450-21f182854184] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:04.349154 1674764 system_pods.go:61] "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:04.349165 1674764 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2jdlm" [1319b2f8-69a5-401c-a890-f3b2110a9af0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.349183 1674764 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dm88z" [078ab857-613a-47a2-9d88-499a5c525f59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.349194 1674764 system_pods.go:61] "storage-provisioner" [d4bd042c-c801-4c4a-98a4-b825d20aad52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:16:04.349206 1674764 system_pods.go:74] duration metric: took 5.131864ms to wait for pod list to return data ...
	I1217 11:16:04.349220 1674764 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:16:04.351654 1674764 default_sa.go:45] found service account: "default"
	I1217 11:16:04.351682 1674764 default_sa.go:55] duration metric: took 2.454472ms for default service account to be created ...
	I1217 11:16:04.351694 1674764 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:16:04.449634 1674764 system_pods.go:86] 20 kube-system pods found
	I1217 11:16:04.449671 1674764 system_pods.go:89] "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:04.449688 1674764 system_pods.go:89] "coredns-66bc5c9577-bk7js" [93210791-8ce9-43e9-9da6-e86d9de52b6f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:04.449696 1674764 system_pods.go:89] "csi-hostpath-attacher-0" [eec1eed5-47ac-49ed-a8be-dee549fb94bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:04.449701 1674764 system_pods.go:89] "csi-hostpath-resizer-0" [5eaaa85c-9ad5-41d8-ac6c-8c4fa13a517c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:04.449707 1674764 system_pods.go:89] "csi-hostpathplugin-swlsr" [c3ad9360-2599-4b66-a906-94b66525daf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:04.449711 1674764 system_pods.go:89] "etcd-addons-767877" [e87b5478-e253-4eef-bdbe-d72caaad1864] Running
	I1217 11:16:04.449715 1674764 system_pods.go:89] "kindnet-nkfjh" [d5de55b0-a578-4d51-b058-d52c5a57ab72] Running
	I1217 11:16:04.449719 1674764 system_pods.go:89] "kube-apiserver-addons-767877" [1f0007be-20ae-4b96-a9ef-6f086ff6e9eb] Running
	I1217 11:16:04.449723 1674764 system_pods.go:89] "kube-controller-manager-addons-767877" [fb8ee463-4ef0-4a87-86cd-fd57584b3815] Running
	I1217 11:16:04.449729 1674764 system_pods.go:89] "kube-ingress-dns-minikube" [d0f34d27-69e5-47e3-b44b-96bbc77f4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:04.449733 1674764 system_pods.go:89] "kube-proxy-dmglt" [93e628dd-43c4-40f5-9d00-5eaeb986dcbd] Running
	I1217 11:16:04.449737 1674764 system_pods.go:89] "kube-scheduler-addons-767877" [0933f37d-9c50-40e4-9e8a-3adba17f3f11] Running
	I1217 11:16:04.449751 1674764 system_pods.go:89] "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:04.449759 1674764 system_pods.go:89] "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:04.449770 1674764 system_pods.go:89] "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:04.449777 1674764 system_pods.go:89] "registry-creds-764b6fb674-crd5v" [ae965757-a78e-4e4e-b450-21f182854184] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:04.449782 1674764 system_pods.go:89] "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:04.449790 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2jdlm" [1319b2f8-69a5-401c-a890-f3b2110a9af0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.449795 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dm88z" [078ab857-613a-47a2-9d88-499a5c525f59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.449803 1674764 system_pods.go:89] "storage-provisioner" [d4bd042c-c801-4c4a-98a4-b825d20aad52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:16:04.449821 1674764 retry.go:31] will retry after 260.899758ms: missing components: kube-dns
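	At this point the pod list is complete but coredns is still Pending, so the check is retried after a short delay ("missing components: kube-dns"). A generic retry-with-backoff helper along those lines, purely illustrative and not minikube's retry package, could be:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil re-runs check with exponential backoff until it succeeds or the deadline passes.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		time.Sleep(wait)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns") // simulated missing component
		}
		return nil
	})
	fmt.Println("result:", err, "after", attempts, "attempts")
}
```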
	I1217 11:16:04.467279 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:04.467555 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:04.487913 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:04.541545 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:04.716272 1674764 system_pods.go:86] 20 kube-system pods found
	I1217 11:16:04.716308 1674764 system_pods.go:89] "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:04.716315 1674764 system_pods.go:89] "coredns-66bc5c9577-bk7js" [93210791-8ce9-43e9-9da6-e86d9de52b6f] Running
	I1217 11:16:04.716322 1674764 system_pods.go:89] "csi-hostpath-attacher-0" [eec1eed5-47ac-49ed-a8be-dee549fb94bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:04.716330 1674764 system_pods.go:89] "csi-hostpath-resizer-0" [5eaaa85c-9ad5-41d8-ac6c-8c4fa13a517c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:04.716339 1674764 system_pods.go:89] "csi-hostpathplugin-swlsr" [c3ad9360-2599-4b66-a906-94b66525daf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:04.716344 1674764 system_pods.go:89] "etcd-addons-767877" [e87b5478-e253-4eef-bdbe-d72caaad1864] Running
	I1217 11:16:04.716349 1674764 system_pods.go:89] "kindnet-nkfjh" [d5de55b0-a578-4d51-b058-d52c5a57ab72] Running
	I1217 11:16:04.716353 1674764 system_pods.go:89] "kube-apiserver-addons-767877" [1f0007be-20ae-4b96-a9ef-6f086ff6e9eb] Running
	I1217 11:16:04.716357 1674764 system_pods.go:89] "kube-controller-manager-addons-767877" [fb8ee463-4ef0-4a87-86cd-fd57584b3815] Running
	I1217 11:16:04.716363 1674764 system_pods.go:89] "kube-ingress-dns-minikube" [d0f34d27-69e5-47e3-b44b-96bbc77f4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:04.716369 1674764 system_pods.go:89] "kube-proxy-dmglt" [93e628dd-43c4-40f5-9d00-5eaeb986dcbd] Running
	I1217 11:16:04.716373 1674764 system_pods.go:89] "kube-scheduler-addons-767877" [0933f37d-9c50-40e4-9e8a-3adba17f3f11] Running
	I1217 11:16:04.716379 1674764 system_pods.go:89] "metrics-server-85b7d694d7-q89cn" [4fe34e06-742e-4967-a029-2bcdc2026e59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:04.716387 1674764 system_pods.go:89] "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:04.716392 1674764 system_pods.go:89] "registry-6b586f9694-lc6z2" [77026c72-37e6-4dc9-9673-5b57193721c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:04.716400 1674764 system_pods.go:89] "registry-creds-764b6fb674-crd5v" [ae965757-a78e-4e4e-b450-21f182854184] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:04.716405 1674764 system_pods.go:89] "registry-proxy-ffwc5" [e44db6b2-7737-4ce0-a9de-3dee51ff3715] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:04.716417 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2jdlm" [1319b2f8-69a5-401c-a890-f3b2110a9af0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.716425 1674764 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dm88z" [078ab857-613a-47a2-9d88-499a5c525f59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:04.716431 1674764 system_pods.go:89] "storage-provisioner" [d4bd042c-c801-4c4a-98a4-b825d20aad52] Running
	I1217 11:16:04.716441 1674764 system_pods.go:126] duration metric: took 364.740674ms to wait for k8s-apps to be running ...
	I1217 11:16:04.716450 1674764 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:16:04.716495 1674764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:16:04.730332 1674764 system_svc.go:56] duration metric: took 13.86904ms WaitForService to wait for kubelet
	I1217 11:16:04.730366 1674764 kubeadm.go:587] duration metric: took 14.522509009s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
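	The kubelet liveness check above shells out to systemctl and treats a zero exit status as "active". A simplified stand-in (dropping the sudo and the exact argument order seen in the log line) could be:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not running:", err)
		return
	}
	fmt.Println("kubelet service is running")
}
```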
	I1217 11:16:04.730394 1674764 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:16:04.733705 1674764 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:16:04.733735 1674764 node_conditions.go:123] node cpu capacity is 8
	I1217 11:16:04.733751 1674764 node_conditions.go:105] duration metric: took 3.351782ms to run NodePressure ...
	I1217 11:16:04.733762 1674764 start.go:242] waiting for startup goroutines ...
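	The NodePressure step above reads the node's reported CPU and ephemeral-storage capacity (8 CPUs and 304681132Ki in this run) from its status. A small client-go sketch that prints the same fields, assuming the default kubeconfig and the node name from this run, is:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-767877", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu capacity: %s, ephemeral-storage: %s\n", cpu.String(), eph.String())
	for _, c := range node.Status.Conditions {
		// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node; Ready should be True.
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
```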
	I1217 11:16:04.967006 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:04.967498 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:04.987672 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:05.041383 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:05.467445 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:05.468622 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:05.487890 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:05.541430 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:05.967963 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:05.968303 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:05.987986 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:06.041841 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:06.467017 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:06.468623 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:06.487816 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:06.541481 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:06.967362 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:06.969038 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:06.988916 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:07.042029 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:07.467039 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:07.468605 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:07.488384 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:07.541841 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:07.968287 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:07.968594 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:07.988080 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:08.042384 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:08.467868 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:08.468359 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:08.488010 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:08.542510 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:08.967421 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:08.968260 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:08.987343 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:09.041194 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:09.467502 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:09.468603 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.567955 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:09.568217 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:09.966752 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:09.967640 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.987776 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:10.041764 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:10.467839 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.467893 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:10.487927 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:10.541846 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:10.969949 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.970018 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:10.988072 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:11.044415 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:11.467035 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:11.468733 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:11.488191 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:11.542479 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:11.967642 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:11.968320 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:11.988144 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:12.042337 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:12.467639 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:12.468426 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:12.488230 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:12.542871 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:12.968020 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:12.968055 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:12.987863 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:13.042086 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:13.466608 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:13.468588 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:13.488146 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:13.542193 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:13.968335 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:13.968366 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:13.987893 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:14.041863 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:14.468140 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:14.468207 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:14.487711 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:14.541846 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:14.966665 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:14.968028 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:14.988398 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:15.041110 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:15.467255 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:15.468226 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:15.487512 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:15.703575 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:15.984438 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:15.984578 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:15.986901 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:16.041873 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:16.467681 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:16.468329 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:16.490681 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:16.541525 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:16.967458 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:16.967515 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:16.987320 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:17.041006 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:17.467332 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:17.468111 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:17.487602 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:17.541819 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:17.967448 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:17.967484 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:17.987594 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:18.068033 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:18.467713 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:18.467742 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:18.487796 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:18.541321 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:18.968704 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:18.969203 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:18.987616 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:19.040862 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:19.467825 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:19.468558 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:19.488573 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:19.541868 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:19.968100 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:19.968112 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:19.988979 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:20.042041 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:20.467867 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:20.468463 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:20.488586 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:20.541898 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:20.966469 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:20.967984 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:20.988472 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:21.041420 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:21.467520 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:21.468622 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:21.487627 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:21.541430 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:21.967295 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:21.967307 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:21.987090 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:22.041604 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:22.467429 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:22.471332 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:22.487735 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:22.541900 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:22.966311 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:22.968052 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:22.988245 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:23.042392 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:23.467397 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:23.468586 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:23.487755 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:23.568183 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:23.966834 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:23.968042 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:23.987813 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:24.041695 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:24.467268 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:24.467575 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:24.487456 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:24.541389 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:24.966701 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:24.968200 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:24.987129 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:25.041792 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:25.467032 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:25.467630 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:25.487508 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:25.541337 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:25.967313 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:25.968384 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:25.987641 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:26.041487 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:26.473463 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:26.473551 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:26.488988 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:26.542065 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:26.966584 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:26.967859 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:26.988690 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:27.041101 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:27.468243 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:27.468267 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:27.488238 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:27.541262 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:27.967662 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:27.967710 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:27.987648 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:28.041058 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:28.467298 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:28.467956 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:28.488617 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:28.568296 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:28.966938 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:28.968111 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:28.987982 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:29.041987 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:29.466682 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:29.467899 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:29.488311 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:29.541016 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:29.967240 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:29.968123 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:29.987789 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:30.041789 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:30.466523 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:30.467817 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:30.487758 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:30.541455 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:30.967383 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:30.968502 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:30.987892 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:31.041663 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:31.467340 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:31.468178 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:31.487481 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:31.541132 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:31.966983 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:31.968466 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:31.988149 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:32.042403 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:32.467720 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:32.468820 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:32.568034 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:32.568145 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:32.966717 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:32.968200 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:32.987661 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:33.041493 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:33.467449 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:33.468497 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:33.488201 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:33.541891 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:33.967273 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:33.967334 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:33.987440 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:34.041067 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:34.467724 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:34.467731 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:34.488370 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:34.541451 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:34.967125 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:34.968958 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:34.988174 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:35.042302 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:35.467649 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:35.468475 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:35.568238 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:35.568345 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:35.967658 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:35.968399 1674764 kapi.go:107] duration metric: took 44.003458125s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 11:16:35.988151 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:36.042236 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:36.467244 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:36.568123 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:36.568488 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:36.966474 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:36.987492 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:37.041174 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:37.467156 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:37.488774 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:37.541864 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:37.967906 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:37.988075 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:38.041705 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:38.467082 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:38.488651 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:38.541500 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:38.967169 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:38.988108 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:39.042689 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:39.467090 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:39.488595 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:39.541268 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:39.967424 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:39.987319 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:40.040784 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:40.466933 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:40.488678 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:40.567817 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:40.967446 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:40.987599 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:41.041906 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:41.466392 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:41.487367 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:41.541016 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:41.966859 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:41.987658 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:42.041581 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:42.466845 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:42.567882 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:42.567944 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:42.967129 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.067898 1674764 kapi.go:107] duration metric: took 44.529885953s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 11:16:43.068760 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:43.071213 1674764 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-767877 cluster.
	I1217 11:16:43.072769 1674764 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 11:16:43.074331 1674764 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 11:16:43.466731 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.489493 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:43.968131 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.988795 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:44.467197 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:44.487679 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:44.967778 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:44.988131 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:45.467424 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:45.488014 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:45.968226 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:45.987856 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:46.467751 1674764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:46.488516 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:46.966937 1674764 kapi.go:107] duration metric: took 55.003833961s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 11:16:46.987769 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:47.488204 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.000423 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.488206 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.988276 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:49.488205 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:49.988089 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:50.488711 1674764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:50.987396 1674764 kapi.go:107] duration metric: took 58.503448344s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 11:16:50.989012 1674764 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1217 11:16:50.990175 1674764 addons.go:530] duration metric: took 1m0.782260551s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds default-storageclass ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1217 11:16:50.990240 1674764 start.go:247] waiting for cluster config update ...
	I1217 11:16:50.990279 1674764 start.go:256] writing updated cluster config ...
	I1217 11:16:50.990621 1674764 ssh_runner.go:195] Run: rm -f paused
	I1217 11:16:50.994841 1674764 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:16:50.998057 1674764 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bk7js" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.002677 1674764 pod_ready.go:94] pod "coredns-66bc5c9577-bk7js" is "Ready"
	I1217 11:16:51.002706 1674764 pod_ready.go:86] duration metric: took 4.62363ms for pod "coredns-66bc5c9577-bk7js" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.004751 1674764 pod_ready.go:83] waiting for pod "etcd-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.008726 1674764 pod_ready.go:94] pod "etcd-addons-767877" is "Ready"
	I1217 11:16:51.008748 1674764 pod_ready.go:86] duration metric: took 3.974452ms for pod "etcd-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.010693 1674764 pod_ready.go:83] waiting for pod "kube-apiserver-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.014453 1674764 pod_ready.go:94] pod "kube-apiserver-addons-767877" is "Ready"
	I1217 11:16:51.014474 1674764 pod_ready.go:86] duration metric: took 3.762696ms for pod "kube-apiserver-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.016703 1674764 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.399436 1674764 pod_ready.go:94] pod "kube-controller-manager-addons-767877" is "Ready"
	I1217 11:16:51.399467 1674764 pod_ready.go:86] duration metric: took 382.738741ms for pod "kube-controller-manager-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.598579 1674764 pod_ready.go:83] waiting for pod "kube-proxy-dmglt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:51.999665 1674764 pod_ready.go:94] pod "kube-proxy-dmglt" is "Ready"
	I1217 11:16:51.999701 1674764 pod_ready.go:86] duration metric: took 401.089361ms for pod "kube-proxy-dmglt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:52.199105 1674764 pod_ready.go:83] waiting for pod "kube-scheduler-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:52.598472 1674764 pod_ready.go:94] pod "kube-scheduler-addons-767877" is "Ready"
	I1217 11:16:52.598507 1674764 pod_ready.go:86] duration metric: took 399.370525ms for pod "kube-scheduler-addons-767877" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:16:52.598522 1674764 pod_ready.go:40] duration metric: took 1.603645348s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:16:52.648093 1674764 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:16:52.650313 1674764 out.go:179] * Done! kubectl is now configured to use "addons-767877" cluster and "default" namespace by default
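	The gcp-auth notes above mention opting individual pods out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. As a rough illustration (the pod name, image, and label value below are illustrative; the log only specifies the label key), a pod created like this carries the label at admission time, which is when the webhook decides whether to mount the credentials:
	
	  # Hypothetical pod carrying the opt-out label at creation time.
	  # Only the label key comes from the log above; the value and names are assumptions.
	  kubectl run skip-gcp-auth-demo \
	    --image=gcr.io/k8s-minikube/busybox \
	    --labels=gcp-auth-skip-secret=true \
	    --restart=Never \
	    -- sleep 3600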
	
	
	==> CRI-O <==
	Dec 17 11:17:19 addons-767877 crio[774]: time="2025-12-17T11:17:19.013098375Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:17:19 addons-767877 crio[774]: time="2025-12-17T11:17:19.035048467Z" level=info msg="Stopped pod sandbox: 467c3d3de698c9360f6b9b47bac0ab31254e475750b268474f2ca61addc65eed" id=be13836f-f00c-456f-a016-4f907fdb4ec0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.703622368Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1/POD" id=54596311-d9c5-4bff-92ad-41202d68a755 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.703717143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.711925248Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1 Namespace:local-path-storage ID:4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301 UID:4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9 NetNS:/var/run/netns/3ed51e2a-b749-4214-a1e5-403411a362a1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000dfe498}] Aliases:map[]}"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.712166394Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1 to CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.72380152Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1 Namespace:local-path-storage ID:4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301 UID:4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9 NetNS:/var/run/netns/3ed51e2a-b749-4214-a1e5-403411a362a1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000dfe498}] Aliases:map[]}"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.72398746Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1 for CNI network kindnet (type=ptp)"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.725194914Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.726407471Z" level=info msg="Ran pod sandbox 4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301 with infra container: local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1/POD" id=54596311-d9c5-4bff-92ad-41202d68a755 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.727983993Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=aaa608f2-8e34-45b4-8efb-1937a65e4552 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.729239837Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=c8aea8ae-c628-4d98-aa36-fc554b1c6ed4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.732824663Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1/helper-pod" id=441d1293-a496-4d96-8b03-f99c9f4fe482 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.733012655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.739762573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.740461448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.772313633Z" level=info msg="Created container 4a9010e0482c845bd682df4af99415e45a1d515ce547b76af8cada32ef8f9abc: local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1/helper-pod" id=441d1293-a496-4d96-8b03-f99c9f4fe482 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.773192084Z" level=info msg="Starting container: 4a9010e0482c845bd682df4af99415e45a1d515ce547b76af8cada32ef8f9abc" id=03ae5d0c-2966-4daf-9ed0-465cf4a07c16 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:17:20 addons-767877 crio[774]: time="2025-12-17T11:17:20.775268879Z" level=info msg="Started container" PID=7685 containerID=4a9010e0482c845bd682df4af99415e45a1d515ce547b76af8cada32ef8f9abc description=local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1/helper-pod id=03ae5d0c-2966-4daf-9ed0-465cf4a07c16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301
	Dec 17 11:17:22 addons-767877 crio[774]: time="2025-12-17T11:17:22.031608139Z" level=info msg="Stopping pod sandbox: 4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301" id=d13ef964-1e5f-4d79-a701-a0377ebd37b1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 11:17:22 addons-767877 crio[774]: time="2025-12-17T11:17:22.031904544Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1 Namespace:local-path-storage ID:4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301 UID:4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9 NetNS:/var/run/netns/3ed51e2a-b749-4214-a1e5-403411a362a1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e8b0}] Aliases:map[]}"
	Dec 17 11:17:22 addons-767877 crio[774]: time="2025-12-17T11:17:22.032032477Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1 from CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:17:22 addons-767877 crio[774]: time="2025-12-17T11:17:22.050499968Z" level=info msg="Stopped pod sandbox: 4e864e0ecc42b9a7d4b96d046f9594431c3eaf482f33eada416103ea71639301" id=d13ef964-1e5f-4d79-a701-a0377ebd37b1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 11:17:23 addons-767877 crio[774]: time="2025-12-17T11:17:23.038805839Z" level=info msg="Removing container: 4a9010e0482c845bd682df4af99415e45a1d515ce547b76af8cada32ef8f9abc" id=e4758923-14b5-478c-94b5-d2a1925f632d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:17:23 addons-767877 crio[774]: time="2025-12-17T11:17:23.045437984Z" level=info msg="Removed container 4a9010e0482c845bd682df4af99415e45a1d515ce547b76af8cada32ef8f9abc: local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1/helper-pod" id=e4758923-14b5-478c-94b5-d2a1925f632d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	d8e2a332e6a18       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            9 seconds ago        Exited              busybox                                  0                   467c3d3de698c       test-local-path                                              default
	6a14d06648a4e       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            14 seconds ago       Exited              helper-pod                               0                   d8ad9741a453c       helper-pod-create-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1   local-path-storage
	5772d128310e6       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           16 seconds ago       Running             nginx                                    0                   0b2f1d9d743bd       nginx                                                        default
	f15ef20302864       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          31 seconds ago       Running             busybox                                  0                   7a852f2314b40       busybox                                                      default
	960e339dfeb9d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          36 seconds ago       Running             csi-snapshotter                          0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                                     kube-system
	45dced8160416       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          37 seconds ago       Running             csi-provisioner                          0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                                     kube-system
	fa2ebcf83b879       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            38 seconds ago       Running             liveness-probe                           0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                                     kube-system
	49ba8a4cf9b16       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           39 seconds ago       Running             hostpath                                 0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                                     kube-system
	4b8f30633c332       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                40 seconds ago       Running             node-driver-registrar                    0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                                     kube-system
	7ac64d7ceb05c       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             41 seconds ago       Running             controller                               0                   7fb8626a1771c       ingress-nginx-controller-85d4c799dd-z2vvn                    ingress-nginx
	2c0beda5bf110       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             41 seconds ago       Exited              patch                                    2                   b5439eeb171ad       gcp-auth-certs-patch-zmx9n                                   gcp-auth
	c6b8c1b1547d4       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             41 seconds ago       Exited              patch                                    2                   3b3e4c3bd47db       ingress-nginx-admission-patch-6dj9n                          ingress-nginx
	25d633e9324b0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 44 seconds ago       Running             gcp-auth                                 0                   63c06dbe6c307       gcp-auth-78565c9fb4-cbs85                                    gcp-auth
	7fce542d2390c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            47 seconds ago       Running             gadget                                   0                   8d456ea4642ab       gadget-8cr2g                                                 gadget
	29bb23388cfae       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   51 seconds ago       Running             csi-external-health-monitor-controller   0                   a5d9237c27e8f       csi-hostpathplugin-swlsr                                     kube-system
	2019334dda3cb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              51 seconds ago       Running             registry-proxy                           0                   ee3b8ab6c4ac3       registry-proxy-ffwc5                                         kube-system
	b9d0b96855eae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   54 seconds ago       Exited              create                                   0                   a09f45d601a2c       gcp-auth-certs-create-nv4bv                                  gcp-auth
	a039ab85e94e7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      54 seconds ago       Running             volume-snapshot-controller               0                   47590040e4a77       snapshot-controller-7d9fbc56b8-dm88z                         kube-system
	6e7d200d76d2d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   54 seconds ago       Exited              create                                   0                   0fb1ffad3df66       ingress-nginx-admission-create-5zpxl                         ingress-nginx
	743ec64dbbba0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     55 seconds ago       Running             amd-gpu-device-plugin                    0                   9551622c5fad2       amd-gpu-device-plugin-54g7h                                  kube-system
	27d01bff29030       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      57 seconds ago       Running             volume-snapshot-controller               0                   b6acc084613f2       snapshot-controller-7d9fbc56b8-2jdlm                         kube-system
	85d34444a3d52       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              58 seconds ago       Running             csi-resizer                              0                   cb12959d3c61a       csi-hostpath-resizer-0                                       kube-system
	b486bd1049fb4       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     59 seconds ago       Running             nvidia-device-plugin-ctr                 0                   1653853ccb50d       nvidia-device-plugin-daemonset-29qcw                         kube-system
	7894f028137e7       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   c3d5551c84036       registry-6b586f9694-lc6z2                                    kube-system
	7ffd867e55d61       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              About a minute ago   Running             yakd                                     0                   3d142fa0be2dc       yakd-dashboard-6654c87f9b-bb445                              yakd-dashboard
	f0b0e753b5cc2       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   29c170e3069ba       local-path-provisioner-648f6765c9-wwkwd                      local-path-storage
	a9e2a2f02ae68       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   ffdf6ba701f8b       csi-hostpath-attacher-0                                      kube-system
	99be406f81626       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   90196c535089c       kube-ingress-dns-minikube                                    kube-system
	3d6dc27d27364       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   58675b31b34b0       metrics-server-85b7d694d7-q89cn                              kube-system
	f9330c6d46d57       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               About a minute ago   Running             cloud-spanner-emulator                   0                   95e517497e177       cloud-spanner-emulator-5bdddb765-v9nvg                       default
	d7add53e16ff4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   742aaf64d4d43       coredns-66bc5c9577-bk7js                                     kube-system
	710c232068b61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   c5f4de2b5aafd       storage-provisioner                                          kube-system
	27822a03994e6       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           About a minute ago   Running             kindnet-cni                              0                   e5f8a0b22f605       kindnet-nkfjh                                                kube-system
	e8a13ad739d84       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago   Running             kube-proxy                               0                   db9b88d602258       kube-proxy-dmglt                                             kube-system
	9f8c99a2db49b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago   Running             kube-controller-manager                  0                   46ffae26bdc9f       kube-controller-manager-addons-767877                        kube-system
	d01e74fe7a95c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago   Running             kube-apiserver                           0                   989b7a46df257       kube-apiserver-addons-767877                                 kube-system
	f965996f6131f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago   Running             kube-scheduler                           0                   01c99f82da5ab       kube-scheduler-addons-767877                                 kube-system
	59bd12719079a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   7ed567bb5f749       etcd-addons-767877                                           kube-system
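	The table above is the CRI-level container listing for the node. If the addons-767877 profile from this run were still up, roughly the same view could be reproduced from the host by running crictl inside the node (a sketch, not part of the test output):
	
	  # Sketch: list all CRI-O containers on the minikube node for this profile.
	  minikube -p addons-767877 ssh -- sudo crictl ps -a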
	
	
	==> coredns [d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370] <==
	[INFO] 10.244.0.15:52497 - 24660 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176687s
	[INFO] 10.244.0.15:53903 - 10489 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000079507s
	[INFO] 10.244.0.15:53903 - 10763 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000139808s
	[INFO] 10.244.0.15:51591 - 28167 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000076435s
	[INFO] 10.244.0.15:51591 - 27959 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00005832s
	[INFO] 10.244.0.15:53734 - 15714 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000058713s
	[INFO] 10.244.0.15:53734 - 15897 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000105415s
	[INFO] 10.244.0.15:46293 - 49039 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093232s
	[INFO] 10.244.0.15:46293 - 48651 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121568s
	[INFO] 10.244.0.21:46209 - 13700 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000256417s
	[INFO] 10.244.0.21:49829 - 63525 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000323755s
	[INFO] 10.244.0.21:36515 - 23907 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151412s
	[INFO] 10.244.0.21:40055 - 28063 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000224317s
	[INFO] 10.244.0.21:37670 - 30048 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167458s
	[INFO] 10.244.0.21:39302 - 49541 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104934s
	[INFO] 10.244.0.21:36984 - 44461 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005624542s
	[INFO] 10.244.0.21:53572 - 1653 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006141938s
	[INFO] 10.244.0.21:43092 - 41379 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004756868s
	[INFO] 10.244.0.21:37902 - 6926 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006645864s
	[INFO] 10.244.0.21:40449 - 40331 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004319917s
	[INFO] 10.244.0.21:55291 - 14990 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007435945s
	[INFO] 10.244.0.21:38506 - 41843 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000928567s
	[INFO] 10.244.0.21:41693 - 17276 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002469617s
	[INFO] 10.244.0.26:38811 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000212436s
	[INFO] 10.244.0.26:49476 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000196459s
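	In the coredns log above, the NXDOMAIN entries are the pod's DNS search suffixes (cluster.local plus the GCE-provided domains) being tried in turn, while the NOERROR entries are the fully qualified service names resolving. A sketch for reproducing the successful lookup from inside the cluster (pod name and image choice are illustrative):
	
	  # Sketch: resolve the registry service the same way the log shows it succeeding.
	  kubectl run dns-check --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox \
	    -- nslookup registry.kube-system.svc.cluster.local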
	
	
	==> describe nodes <==
	Name:               addons-767877
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-767877
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=addons-767877
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_15_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-767877
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-767877"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:15:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-767877
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:17:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:17:26 +0000   Wed, 17 Dec 2025 11:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:17:26 +0000   Wed, 17 Dec 2025 11:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:17:26 +0000   Wed, 17 Dec 2025 11:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:17:26 +0000   Wed, 17 Dec 2025 11:16:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-767877
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                6f0cef8c-aca7-4308-b71d-bd92de2642d5
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-5bdddb765-v9nvg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  gadget                      gadget-8cr2g                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gcp-auth                    gcp-auth-78565c9fb4-cbs85                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-z2vvn    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         96s
	  kube-system                 amd-gpu-device-plugin-54g7h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 coredns-66bc5c9577-bk7js                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     97s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpathplugin-swlsr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 etcd-addons-767877                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-nkfjh                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      98s
	  kube-system                 kube-apiserver-addons-767877                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-addons-767877        200m (2%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-dmglt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-addons-767877                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 metrics-server-85b7d694d7-q89cn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         96s
	  kube-system                 nvidia-device-plugin-daemonset-29qcw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 registry-6b586f9694-lc6z2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 registry-creds-764b6fb674-crd5v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 registry-proxy-ffwc5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 snapshot-controller-7d9fbc56b8-2jdlm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 snapshot-controller-7d9fbc56b8-dm88z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  local-path-storage          local-path-provisioner-648f6765c9-wwkwd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-bb445              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node addons-767877 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node addons-767877 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x8 over 108s)  kubelet          Node addons-767877 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node addons-767877 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node addons-767877 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node addons-767877 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           99s                  node-controller  Node addons-767877 event: Registered Node addons-767877 in Controller
	  Normal  NodeReady                84s                  kubelet          Node addons-767877 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa c0 d7 5e c5 70 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 98 e5 98 3a 77 08 06
	[Dec17 10:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a 5a 7c d5 42 6f 08 06
	[  +0.039552] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[ +17.490571] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875] <==
	{"level":"warn","ts":"2025-12-17T11:15:41.348076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.354801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.361609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.368368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.374871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.382579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.389425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.396752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.404284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.411761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.418341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.432478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.439224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.446062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:41.497513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:53.013917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:15:53.020793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:15.701617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.290066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:16:15.701725Z","caller":"traceutil/trace.go:172","msg":"trace[1360142914] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1008; }","duration":"127.411285ms","start":"2025-12-17T11:16:15.574298Z","end":"2025-12-17T11:16:15.701709Z","steps":["trace[1360142914] 'range keys from in-memory index tree'  (duration: 127.201575ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:16:15.812520Z","caller":"traceutil/trace.go:172","msg":"trace[2089671935] transaction","detail":"{read_only:false; response_revision:1009; number_of_response:1; }","duration":"102.281535ms","start":"2025-12-17T11:16:15.710219Z","end":"2025-12-17T11:16:15.812500Z","steps":["trace[2089671935] 'process raft request'  (duration: 102.143069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:16:18.939917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:18.946507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:18.962085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:16:18.969440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T11:16:48.206890Z","caller":"traceutil/trace.go:172","msg":"trace[1891087675] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"164.272878ms","start":"2025-12-17T11:16:48.042597Z","end":"2025-12-17T11:16:48.206870Z","steps":["trace[1891087675] 'process raft request'  (duration: 100.622087ms)","trace[1891087675] 'compare'  (duration: 63.540334ms)"],"step_count":2}
	
	
	==> gcp-auth [25d633e9324b084a2402d79743b4434abf41696f0d1cab8205102f1e7493f3dd] <==
	2025/12/17 11:16:42 GCP Auth Webhook started!
	2025/12/17 11:16:53 Ready to marshal response ...
	2025/12/17 11:16:53 Ready to write response ...
	2025/12/17 11:16:53 Ready to marshal response ...
	2025/12/17 11:16:53 Ready to write response ...
	2025/12/17 11:16:53 Ready to marshal response ...
	2025/12/17 11:16:53 Ready to write response ...
	2025/12/17 11:17:08 Ready to marshal response ...
	2025/12/17 11:17:08 Ready to write response ...
	2025/12/17 11:17:08 Ready to marshal response ...
	2025/12/17 11:17:08 Ready to write response ...
	2025/12/17 11:17:08 Ready to marshal response ...
	2025/12/17 11:17:08 Ready to write response ...
	2025/12/17 11:17:11 Ready to marshal response ...
	2025/12/17 11:17:11 Ready to write response ...
	2025/12/17 11:17:20 Ready to marshal response ...
	2025/12/17 11:17:20 Ready to write response ...
	
	
	==> kernel <==
	 11:17:27 up  4:59,  0 user,  load average: 1.24, 0.83, 1.22
	Linux addons-767877 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766] <==
	I1217 11:15:53.269317       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:15:53.269352       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:15:53.269365       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:15:53.269492       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:15:53.669500       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:15:53.669558       1 metrics.go:72] Registering metrics
	I1217 11:15:53.669675       1 controller.go:711] "Syncing nftables rules"
	I1217 11:16:03.265939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:16:03.266022       1 main.go:301] handling current node
	I1217 11:16:13.262832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:16:13.262883       1 main.go:301] handling current node
	I1217 11:16:23.262313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:16:23.262365       1 main.go:301] handling current node
	I1217 11:16:33.262292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:16:33.262370       1 main.go:301] handling current node
	I1217 11:16:43.262890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:16:43.262944       1 main.go:301] handling current node
	I1217 11:16:53.262270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:16:53.262324       1 main.go:301] handling current node
	I1217 11:17:03.266469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:17:03.266516       1 main.go:301] handling current node
	I1217 11:17:13.262590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:17:13.262626       1 main.go:301] handling current node
	I1217 11:17:23.262624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 11:17:23.262662       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc] <==
	E1217 11:16:11.686636       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.1.140:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.1.140:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.1.140:443: connect: connection refused" logger="UnhandledError"
	W1217 11:16:12.687234       1 handler_proxy.go:99] no RequestInfo found in the context
	W1217 11:16:12.687259       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 11:16:12.687293       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1217 11:16:12.687307       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1217 11:16:12.687317       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1217 11:16:12.688452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1217 11:16:16.695896       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 11:16:16.695965       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 11:16:16.695972       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.1.140:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.1.140:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1217 11:16:16.704298       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1217 11:16:18.939802       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 11:16:18.946458       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 11:16:18.961994       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 11:16:18.969438       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1217 11:17:01.349560       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60660: use of closed network connection
	E1217 11:17:01.514460       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60690: use of closed network connection
	I1217 11:17:08.320228       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 11:17:08.547301       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.173.226"}
	
	
	==> kube-controller-manager [9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a] <==
	I1217 11:15:48.924954       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 11:15:48.925002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:15:48.925385       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 11:15:48.925405       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 11:15:48.925436       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:15:48.925487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:15:48.925564       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:15:48.925645       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 11:15:48.925658       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:15:48.925788       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 11:15:48.925796       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:15:48.926007       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 11:15:48.926138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 11:15:48.926702       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:15:48.927696       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 11:15:48.929011       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 11:15:48.929702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:15:48.947563       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 11:15:51.513421       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 11:16:03.926927       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 11:16:18.933939       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 11:16:18.934004       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 11:16:18.956166       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 11:16:19.034635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:16:19.056856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989] <==
	I1217 11:15:50.593470       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:15:50.800615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:15:51.006495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:15:51.006564       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 11:15:51.006681       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:15:51.407580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:15:51.407778       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:15:51.431965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:15:51.432708       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:15:51.433053       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:15:51.445255       1 config.go:200] "Starting service config controller"
	I1217 11:15:51.445284       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:15:51.445366       1 config.go:309] "Starting node config controller"
	I1217 11:15:51.445384       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:15:51.445429       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:15:51.445436       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:15:51.445457       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:15:51.445463       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:15:51.551780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:15:51.551825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:15:51.551838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:15:51.554122       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86] <==
	E1217 11:15:41.946769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:15:41.946803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:15:41.946862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:15:41.946908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:15:41.946957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:15:41.946962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:15:41.947028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:15:41.947085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:15:41.947094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 11:15:42.826997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:15:42.841598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:15:42.849721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:15:42.859193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:15:42.881669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:15:42.931846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:15:42.965928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:15:42.987138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:15:43.019464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:15:43.052439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:15:43.106614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:15:43.156344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 11:15:43.230691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 11:15:43.252210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:15:43.274608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 11:15:46.043459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:17:19 addons-767877 kubelet[1307]: I1217 11:17:19.091396    1307 reconciler_common.go:299] "Volume detached for volume \"pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1\" (UniqueName: \"kubernetes.io/host-path/d9aacc6d-b566-46d9-83fe-f3ce87a6c996-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:19 addons-767877 kubelet[1307]: I1217 11:17:19.091409    1307 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9aacc6d-b566-46d9-83fe-f3ce87a6c996-gcp-creds\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:19 addons-767877 kubelet[1307]: I1217 11:17:19.093469    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9aacc6d-b566-46d9-83fe-f3ce87a6c996-kube-api-access-nhxk4" (OuterVolumeSpecName: "kube-api-access-nhxk4") pod "d9aacc6d-b566-46d9-83fe-f3ce87a6c996" (UID: "d9aacc6d-b566-46d9-83fe-f3ce87a6c996"). InnerVolumeSpecName "kube-api-access-nhxk4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 11:17:19 addons-767877 kubelet[1307]: I1217 11:17:19.191804    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhxk4\" (UniqueName: \"kubernetes.io/projected/d9aacc6d-b566-46d9-83fe-f3ce87a6c996-kube-api-access-nhxk4\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.019597    1307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="467c3d3de698c9360f6b9b47bac0ab31254e475750b268474f2ca61addc65eed"
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.501852    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-gcp-creds\") pod \"helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") " pod="local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1"
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.501926    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtppd\" (UniqueName: \"kubernetes.io/projected/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-kube-api-access-xtppd\") pod \"helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") " pod="local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1"
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.502111    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-script\") pod \"helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") " pod="local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1"
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.502164    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-data\") pod \"helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") " pod="local-path-storage/helper-pod-delete-pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1"
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.532734    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4a570c-4540-48a6-877d-f7c473afa7a1" path="/var/lib/kubelet/pods/8c4a570c-4540-48a6-877d-f7c473afa7a1/volumes"
	Dec 17 11:17:20 addons-767877 kubelet[1307]: I1217 11:17:20.533237    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9aacc6d-b566-46d9-83fe-f3ce87a6c996" path="/var/lib/kubelet/pods/d9aacc6d-b566-46d9-83fe-f3ce87a6c996/volumes"
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.116955    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-data\") pod \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") "
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117010    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-script\") pod \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") "
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117053    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtppd\" (UniqueName: \"kubernetes.io/projected/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-kube-api-access-xtppd\") pod \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") "
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117076    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-gcp-creds\") pod \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\" (UID: \"4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9\") "
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117132    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-data" (OuterVolumeSpecName: "data") pod "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9" (UID: "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117192    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9" (UID: "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117276    1307 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-gcp-creds\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117294    1307 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-data\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.117469    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-script" (OuterVolumeSpecName: "script") pod "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9" (UID: "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.119429    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-kube-api-access-xtppd" (OuterVolumeSpecName: "kube-api-access-xtppd") pod "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9" (UID: "4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9"). InnerVolumeSpecName "kube-api-access-xtppd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.218403    1307 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-script\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.218456    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xtppd\" (UniqueName: \"kubernetes.io/projected/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9-kube-api-access-xtppd\") on node \"addons-767877\" DevicePath \"\""
	Dec 17 11:17:22 addons-767877 kubelet[1307]: I1217 11:17:22.532755    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9" path="/var/lib/kubelet/pods/4bdf8d45-1c03-4ceb-b08f-dd3242a0fda9/volumes"
	Dec 17 11:17:23 addons-767877 kubelet[1307]: I1217 11:17:23.037506    1307 scope.go:117] "RemoveContainer" containerID="4a9010e0482c845bd682df4af99415e45a1d515ce547b76af8cada32ef8f9abc"
	
	
	==> storage-provisioner [710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa] <==
	W1217 11:17:02.586929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:04.590693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:04.594818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:06.598710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:06.602860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:08.605957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:08.610767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:10.614476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:10.619352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:12.622470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:12.627887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:14.632333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:14.637072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:16.641160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:16.645698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:18.649214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:18.655872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:20.660215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:20.664728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:22.667987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:22.673701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:24.677626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:24.682495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:26.686036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:17:26.689899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-767877 -n addons-767877
helpers_test.go:270: (dbg) Run:  kubectl --context addons-767877 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n registry-creds-764b6fb674-crd5v
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-767877 describe pod ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n registry-creds-764b6fb674-crd5v
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-767877 describe pod ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n registry-creds-764b6fb674-crd5v: exit status 1 (64.855045ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5zpxl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6dj9n" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-crd5v" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-767877 describe pod ingress-nginx-admission-create-5zpxl ingress-nginx-admission-patch-6dj9n registry-creds-764b6fb674-crd5v: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.875651ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:28.526024 1686263 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:28.526287 1686263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:28.526298 1686263 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:28.526303 1686263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:28.526585 1686263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:28.526963 1686263 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:28.527320 1686263 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:28.527342 1686263 addons.go:622] checking whether the cluster is paused
	I1217 11:17:28.527448 1686263 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:28.527470 1686263 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:28.527939 1686263 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:28.548388 1686263 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:28.548461 1686263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:28.567149 1686263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:28.662954 1686263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:28.663055 1686263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:28.695147 1686263 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:28.695173 1686263 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:28.695179 1686263 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:28.695185 1686263 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:28.695189 1686263 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:28.695195 1686263 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:28.695199 1686263 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:28.695204 1686263 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:28.695208 1686263 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:28.695214 1686263 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:28.695218 1686263 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:28.695222 1686263 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:28.695227 1686263 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:28.695232 1686263 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:28.695237 1686263 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:28.695251 1686263 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:28.695259 1686263 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:28.695265 1686263 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:28.695270 1686263 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:28.695274 1686263 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:28.695279 1686263 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:28.695283 1686263 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:28.695288 1686263 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:28.695292 1686263 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:28.695296 1686263 cri.go:89] found id: ""
	I1217 11:17:28.695342 1686263 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:28.711143 1686263 out.go:203] 
	W1217 11:17:28.712594 1686263 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:28.712627 1686263 out.go:285] * 
	* 
	W1217 11:17:28.719055 1686263 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:28.720977 1686263 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.73s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-v9nvg" [138d1d7b-65b6-46ce-a074-af08a4720eca] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003218372s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (289.831105ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:27.818044 1686016 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:27.818201 1686016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:27.818213 1686016 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:27.818219 1686016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:27.818447 1686016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:27.818840 1686016 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:27.819303 1686016 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:27.819333 1686016 addons.go:622] checking whether the cluster is paused
	I1217 11:17:27.819478 1686016 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:27.819498 1686016 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:27.820070 1686016 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:27.840713 1686016 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:27.840790 1686016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:27.864681 1686016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:27.967502 1686016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:27.967653 1686016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:28.009452 1686016 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:28.009481 1686016 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:28.009487 1686016 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:28.009492 1686016 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:28.009496 1686016 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:28.009591 1686016 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:28.009598 1686016 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:28.009602 1686016 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:28.009607 1686016 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:28.009644 1686016 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:28.009653 1686016 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:28.009658 1686016 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:28.009663 1686016 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:28.009667 1686016 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:28.009672 1686016 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:28.009779 1686016 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:28.009795 1686016 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:28.009803 1686016 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:28.009808 1686016 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:28.009813 1686016 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:28.009826 1686016 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:28.009834 1686016 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:28.009839 1686016 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:28.009844 1686016 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:28.009859 1686016 cri.go:89] found id: ""
	I1217 11:17:28.009917 1686016 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:28.027263 1686016 out.go:203] 
	W1217 11:17:28.029042 1686016 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:28.029069 1686016 out.go:285] * 
	* 
	W1217 11:17:28.036126 1686016 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:28.039385 1686016 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)
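
Every addon-disable failure in this report follows the same pattern visible in the trace above: before disabling an addon, minikube checks whether the cluster is paused, and the `sudo runc list -f json` step exits with status 1 because `/run/runc` does not exist on this CRI-O node, which surfaces as `MK_ADDON_DISABLE_PAUSED` (exit status 11). The Go sketch below is illustrative only and is not minikube's implementation; it reproduces the same probe and, purely as an assumption, treats a missing runc state directory as an empty container list rather than a hard error.

```go
// Illustrative sketch only; not minikube's code. Reproduces the probe seen above
// ("sudo runc list -f json") and assumes a fallback for a missing state directory.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// listRuncContainers runs `sudo runc list -f json` and returns its stdout.
func listRuncContainers() (string, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// This is the failure mode in the report: runc exits non-zero and logs
		// `open /run/runc: no such file or directory`.
		if strings.Contains(stderr.String(), "no such file or directory") {
			return "[]", nil // assumed fallback: no runc-managed containers
		}
		return "", fmt.Errorf("runc list -f json: %w\nstderr: %s", err, stderr.String())
	}
	return stdout.String(), nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("runc containers:", out)
}
```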

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-767877 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-767877 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [d9aacc6d-b566-46d9-83fe-f3ce87a6c996] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [d9aacc6d-b566-46d9-83fe-f3ce87a6c996] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [d9aacc6d-b566-46d9-83fe-f3ce87a6c996] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003699379s
addons_test.go:969: (dbg) Run:  kubectl --context addons-767877 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 ssh "cat /opt/local-path-provisioner/pvc-0f700a03-e387-4cdf-b643-426d00a4a6d1_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-767877 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-767877 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (266.893492ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:20.528577 1685090 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:20.528746 1685090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:20.528758 1685090 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:20.528762 1685090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:20.529033 1685090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:20.529385 1685090 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:20.529832 1685090 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:20.529858 1685090 addons.go:622] checking whether the cluster is paused
	I1217 11:17:20.529986 1685090 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:20.530004 1685090 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:20.530632 1685090 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:20.550194 1685090 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:20.550275 1685090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:20.569494 1685090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:20.665847 1685090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:20.665924 1685090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:20.698657 1685090 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:20.698686 1685090 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:20.698693 1685090 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:20.698705 1685090 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:20.698708 1685090 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:20.698712 1685090 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:20.698715 1685090 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:20.698718 1685090 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:20.698720 1685090 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:20.698727 1685090 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:20.698730 1685090 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:20.698733 1685090 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:20.698735 1685090 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:20.698738 1685090 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:20.698741 1685090 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:20.698749 1685090 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:20.698755 1685090 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:20.698759 1685090 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:20.698762 1685090 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:20.698764 1685090 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:20.698769 1685090 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:20.698772 1685090 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:20.698774 1685090 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:20.698777 1685090 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:20.698779 1685090 cri.go:89] found id: ""
	I1217 11:17:20.698821 1685090 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:20.716805 1685090 out.go:203] 
	W1217 11:17:20.718573 1685090 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:20.718617 1685090 out.go:285] * 
	* 
	W1217 11:17:20.726322 1685090 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:20.728540 1685090 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (12.17s)
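
The LocalPath scenario itself succeeds (the PVC is provisioned and the test pod completes); only the final addon-disable step hits the same runc error. The repeated `kubectl get pvc test-pvc -o jsonpath={.status.phase}` calls above are a simple phase poll. A minimal stand-alone version is sketched below, using the profile, namespace, and PVC name from the log; it is illustrative only and not the test helper's actual code, and the timeout and poll interval are assumptions.

```go
// Illustrative PVC phase poll via kubectl, mirroring the jsonpath query in the log.
// Not the helpers_test implementation; timeout and interval are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCPhase(context, namespace, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
			"-n", namespace, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-767877", "default", "test-pvc", "Bound", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```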

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-29qcw" [126659ae-963b-4c25-b391-6b0e5bc691f9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004292247s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (260.91931ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:25.799437 1685358 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:25.799769 1685358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:25.799780 1685358 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:25.799785 1685358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:25.800045 1685358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:25.800398 1685358 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:25.800789 1685358 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:25.800814 1685358 addons.go:622] checking whether the cluster is paused
	I1217 11:17:25.800918 1685358 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:25.800935 1685358 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:25.801339 1685358 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:25.820249 1685358 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:25.820308 1685358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:25.839050 1685358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:25.933231 1685358 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:25.933336 1685358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:25.968227 1685358 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:25.968255 1685358 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:25.968261 1685358 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:25.968266 1685358 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:25.968270 1685358 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:25.968274 1685358 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:25.968277 1685358 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:25.968280 1685358 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:25.968283 1685358 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:25.968292 1685358 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:25.968295 1685358 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:25.968299 1685358 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:25.968302 1685358 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:25.968305 1685358 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:25.968308 1685358 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:25.968327 1685358 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:25.968333 1685358 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:25.968338 1685358 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:25.968341 1685358 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:25.968344 1685358 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:25.968347 1685358 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:25.968349 1685358 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:25.968353 1685358 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:25.968355 1685358 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:25.968358 1685358 cri.go:89] found id: ""
	I1217 11:17:25.968397 1685358 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:25.986712 1685358 out.go:203] 
	W1217 11:17:25.988080 1685358 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:25.988114 1685358 out.go:285] * 
	* 
	W1217 11:17:25.994831 1685358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:25.996314 1685358 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)
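
The test's readiness check ("waiting 6m0s for pods matching name=nvidia-device-plugin-ds") is a label-selector poll performed by the test helpers. An alternative way to express the same wait outside the suite is `kubectl wait`, sketched below; this is not the helper's code, and the context, label, namespace, and timeout are simply taken from the log above.

```go
// Illustrative alternative to the helper's polling: block until pods matching the
// label are Ready using `kubectl wait`. Not the test suite's implementation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-767877",
		"wait", "--for=condition=Ready", "pod",
		"-l", "name=nvidia-device-plugin-ds",
		"-n", "kube-system", "--timeout=6m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}
```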

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-bb445" [846971a5-87d5-4759-92fc-2021629c45bd] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003845017s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable yakd --alsologtostderr -v=1: exit status 11 (259.610133ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:22.547424 1685236 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:22.547693 1685236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:22.547702 1685236 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:22.547707 1685236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:22.547908 1685236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:22.548183 1685236 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:22.548519 1685236 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:22.548550 1685236 addons.go:622] checking whether the cluster is paused
	I1217 11:17:22.548633 1685236 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:22.548646 1685236 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:22.549072 1685236 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:22.567668 1685236 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:22.567746 1685236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:22.586554 1685236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:22.681233 1685236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:22.681322 1685236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:22.716551 1685236 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:22.716620 1685236 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:22.716629 1685236 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:22.716633 1685236 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:22.716636 1685236 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:22.716640 1685236 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:22.716643 1685236 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:22.716646 1685236 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:22.716649 1685236 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:22.716657 1685236 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:22.716663 1685236 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:22.716666 1685236 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:22.716669 1685236 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:22.716671 1685236 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:22.716674 1685236 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:22.716683 1685236 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:22.716688 1685236 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:22.716693 1685236 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:22.716696 1685236 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:22.716698 1685236 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:22.716701 1685236 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:22.716704 1685236 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:22.716706 1685236 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:22.716709 1685236 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:22.716712 1685236 cri.go:89] found id: ""
	I1217 11:17:22.716758 1685236 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:22.732791 1685236 out.go:203] 
	W1217 11:17:22.734495 1685236 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:22.734517 1685236 out.go:285] * 
	* 
	W1217 11:17:22.741409 1685236 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:22.743006 1685236 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
I1217 11:17:01.783831 1672941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-54g7h" [0d30afbe-138e-4eec-b4f9-dc3c0a8c9362] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.00378707s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-767877 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-767877 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (272.824542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:17:07.851625 1683164 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:17:07.851730 1683164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:07.851740 1683164 out.go:374] Setting ErrFile to fd 2...
	I1217 11:17:07.851744 1683164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:17:07.851986 1683164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:17:07.852261 1683164 mustload.go:66] Loading cluster: addons-767877
	I1217 11:17:07.852728 1683164 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:07.852751 1683164 addons.go:622] checking whether the cluster is paused
	I1217 11:17:07.852872 1683164 config.go:182] Loaded profile config "addons-767877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:17:07.852886 1683164 host.go:66] Checking if "addons-767877" exists ...
	I1217 11:17:07.853374 1683164 cli_runner.go:164] Run: docker container inspect addons-767877 --format={{.State.Status}}
	I1217 11:17:07.875839 1683164 ssh_runner.go:195] Run: systemctl --version
	I1217 11:17:07.875915 1683164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-767877
	I1217 11:17:07.899248 1683164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/addons-767877/id_rsa Username:docker}
	I1217 11:17:07.994208 1683164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:17:07.994300 1683164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:17:08.026620 1683164 cri.go:89] found id: "960e339dfeb9d64e468c8fe2b978f8436aee03ca2d9d0c874e30e827782e148d"
	I1217 11:17:08.026663 1683164 cri.go:89] found id: "45dced8160416d4ca70a6263aecc4d724fbf9baa1ce8c98df9846ef096a1ba14"
	I1217 11:17:08.026670 1683164 cri.go:89] found id: "fa2ebcf83b879c99aaef8290c5bc9803b14e4f87c3da8d92a503cf7f18b27574"
	I1217 11:17:08.026674 1683164 cri.go:89] found id: "49ba8a4cf9b168810c600bccccd1378758bfacd3d59723fa0e1f8a0917d385be"
	I1217 11:17:08.026678 1683164 cri.go:89] found id: "4b8f30633c332cd97c41916fa0aede5eddc55f948e81f44c3cf45d80c38b77ce"
	I1217 11:17:08.026690 1683164 cri.go:89] found id: "29bb23388cfae8291c7d98bb8621fcc522d4e3048463079dcc94f9d8eb258c11"
	I1217 11:17:08.026695 1683164 cri.go:89] found id: "2019334dda3cba06cef0fc56af0ab4c2be58b4559cf1e5f7d885fce42838de88"
	I1217 11:17:08.026699 1683164 cri.go:89] found id: "a039ab85e94e7d5a609f5a39038a6c39fa0b5d5d0ff20330537951122e65a1bf"
	I1217 11:17:08.026703 1683164 cri.go:89] found id: "743ec64dbbba032c0b152b016c4d16a132058821c6656d9bf4b885a4538de535"
	I1217 11:17:08.026717 1683164 cri.go:89] found id: "27d01bff29030e3e440844235864402767222a18a4b2589fd98609b44b324e3e"
	I1217 11:17:08.026722 1683164 cri.go:89] found id: "85d34444a3d52be9cf958417d7f3f1c2a118f53282b3cb16b1e4262f901c260c"
	I1217 11:17:08.026727 1683164 cri.go:89] found id: "b486bd1049fb4401a8ec95e24a22c3d1c047445831a1c724ccf4c4878a5c0be6"
	I1217 11:17:08.026732 1683164 cri.go:89] found id: "7894f028137e733fbc2b2f24e305ddb8b05a29c2fd84eda5ef7f70a0271c0a20"
	I1217 11:17:08.026737 1683164 cri.go:89] found id: "a9e2a2f02ae680343eac26c3a1f3539df911073d14d8bff529affb8fb9ad6104"
	I1217 11:17:08.026742 1683164 cri.go:89] found id: "99be406f81626515d24df8084578f0d259b4644cbdaf18633e76345d3cab44a0"
	I1217 11:17:08.026759 1683164 cri.go:89] found id: "3d6dc27d27364ebae2ca257f1718c8d8e6da72453f6c188d2ad54e8494ea2deb"
	I1217 11:17:08.026767 1683164 cri.go:89] found id: "d7add53e16ff42454b0a5dcec637c06163c524dfef872b4aa863e7b1c088a370"
	I1217 11:17:08.026773 1683164 cri.go:89] found id: "710c232068b61c2039787140908a32c36ac4c0cbbe62af12dcf33c141a3cfaaa"
	I1217 11:17:08.026778 1683164 cri.go:89] found id: "27822a03994e613b296ab393a9bf8bc02cac84b6e93a09b8263dfa9312e85766"
	I1217 11:17:08.026783 1683164 cri.go:89] found id: "e8a13ad739d84a8f1f25c068538a0e37aa029c7fa101ce77d945a64b10719989"
	I1217 11:17:08.026792 1683164 cri.go:89] found id: "9f8c99a2db49bde7899d175bed5443fc090916905dc414013d1523a6e955d51a"
	I1217 11:17:08.026801 1683164 cri.go:89] found id: "d01e74fe7a95cd11dd36b5e89a5b24f9a5c488c6f33e8a27bd910d47f7e296dc"
	I1217 11:17:08.026806 1683164 cri.go:89] found id: "f965996f6131fb630c4c351e6437cec9d7a6d749bad78ad5849c3908f7344e86"
	I1217 11:17:08.026811 1683164 cri.go:89] found id: "59bd12719079ac68b4c06902980db89bf93e26eae6cca50ae9de5a7366a43875"
	I1217 11:17:08.026816 1683164 cri.go:89] found id: ""
	I1217 11:17:08.026884 1683164 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:17:08.043752 1683164 out.go:203] 
	W1217 11:17:08.045727 1683164 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:17:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:17:08.045747 1683164 out.go:285] * 
	* 
	W1217 11:17:08.051665 1683164 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:17:08.053306 1683164 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-767877 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.28s)
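
Each of these stderr traces also lists kube-system containers with `sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints one container ID per line (the long hex IDs above); that step succeeds, and only the subsequent `runc list` fails. A stand-alone equivalent for running directly on the node is sketched below; it is not minikube's code and assumes crictl is on PATH with sudo access.

```go
// Illustrative sketch of the container listing step shown in the logs above.
// Collects the IDs printed one per line by crictl; not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	fmt.Println(len(ids), "kube-system containers", err)
}
```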

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image load --daemon kicbase/echo-server:functional-212713 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 image load --daemon kicbase/echo-server:functional-212713 --alsologtostderr: (2.493484632s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 image ls: (2.280038171s)
functional_test.go:461: expected "kicbase/echo-server:functional-212713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.77s)
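
This failure differs from the runc-related ones: `image load --daemon` reports success, but the subsequent `image ls` does not contain the expected tag. The sketch below shows a minimal presence check of the same kind, using the binary path, profile, and tag from the invocation above; it is illustrative only and not the test's assertion code.

```go
// Illustrative check for whether a tag appears in `minikube image ls` output.
// Not the test's own code; binary path, profile, and tag are taken from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageLoaded(minikube, profile, tag string) (bool, error) {
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, tag) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := imageLoaded("out/minikube-linux-amd64", "functional-212713",
		"kicbase/echo-server:functional-212713")
	fmt.Println("image present:", ok, "err:", err)
}
```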

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.31s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-843742 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-843742 --output=json --user=testUser: exit status 80 (2.311545863s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6d343173-e371-4c93-b14c-0be5a1d33a2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-843742 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0cc7eee0-92cd-4337-b8e9-62c4c973b894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T11:35:00Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"cfd0eaf4-c07c-45b1-9d9a-68a018281a57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-843742 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.31s)
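
With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as shown in the stdout above; the failing event here is `GUEST_PAUSE` with exit code 80 and the same runc error in its message. The decoder below is an illustrative sketch for reading such lines: the struct fields mirror the sample events, the embedded message is abbreviated, and the parsing code is not part of the test suite.

```go
// Illustrative decoder for the CloudEvents-style lines minikube prints with
// --output=json. Field names mirror the events shown above; not the suite's parser.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the stdout above.
	line := `{"specversion":"1.0","id":"0cc7eee0-92cd-4337-b8e9-62c4c973b894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"80","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1","name":"GUEST_PAUSE"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}
```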

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.02s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-843742 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-843742 --output=json --user=testUser: exit status 80 (2.019265205s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8ef98eb5-bb1b-43f6-8ca0-eac4fd7353d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-843742 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4635d8d6-9f42-4de8-b271-a3227be1514d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T11:35:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"8c0dbfdb-c783-4dda-89ae-6e74ead2a8f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-843742 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.02s)

                                                
                                    
x
+
TestPause/serial/Pause (6.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-016656 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-016656 --alsologtostderr -v=5: exit status 80 (2.491691675s)

                                                
                                                
-- stdout --
	* Pausing node pause-016656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:48:58.248902 1881925 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:48:58.249006 1881925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:48:58.249017 1881925 out.go:374] Setting ErrFile to fd 2...
	I1217 11:48:58.249022 1881925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:48:58.249227 1881925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:48:58.249484 1881925 out.go:368] Setting JSON to false
	I1217 11:48:58.249508 1881925 mustload.go:66] Loading cluster: pause-016656
	I1217 11:48:58.249950 1881925 config.go:182] Loaded profile config "pause-016656": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:48:58.250385 1881925 cli_runner.go:164] Run: docker container inspect pause-016656 --format={{.State.Status}}
	I1217 11:48:58.271180 1881925 host.go:66] Checking if "pause-016656" exists ...
	I1217 11:48:58.271526 1881925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:48:58.369960 1881925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-17 11:48:58.357635443 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:48:58.370858 1881925 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-016656 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 11:48:58.373017 1881925 out.go:179] * Pausing node pause-016656 ... 
	I1217 11:48:58.374393 1881925 host.go:66] Checking if "pause-016656" exists ...
	I1217 11:48:58.374772 1881925 ssh_runner.go:195] Run: systemctl --version
	I1217 11:48:58.374818 1881925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-016656
	I1217 11:48:58.399907 1881925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/pause-016656/id_rsa Username:docker}
	I1217 11:48:58.499284 1881925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:48:58.513854 1881925 pause.go:52] kubelet running: true
	I1217 11:48:58.513937 1881925 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:48:58.673829 1881925 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:48:58.673913 1881925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:48:58.748338 1881925 cri.go:89] found id: "738b93ea18f545c95fceec6b5cc86d44e7222917e68ec3f742e523eda4b33f63"
	I1217 11:48:58.748370 1881925 cri.go:89] found id: "849a0e31e1a02f7d874df40c5247298424b7cc7ecc4c6af63bddcf02ed5b3bf5"
	I1217 11:48:58.748376 1881925 cri.go:89] found id: "6783c23b649da94f218090448e20634bd08ed8613ee0fc4970baf0710d1cb37a"
	I1217 11:48:58.748381 1881925 cri.go:89] found id: "f5667db940ec1747e843db665e3b2bb01474456533fe3fcb8120230f3d8fbea4"
	I1217 11:48:58.748385 1881925 cri.go:89] found id: "af6a64f34f500a3de7067fe3192f7f7f925bc08286bfed53e0f722f0b96a037c"
	I1217 11:48:58.748393 1881925 cri.go:89] found id: "0693ab25679d5756c48ca38263cfd8995d66c84715097772ae29f18b72bdf1a7"
	I1217 11:48:58.748400 1881925 cri.go:89] found id: "2998ec06acced2c8225eaab508140d0b2fb0b8b67c04c2f395375c463dfdf085"
	I1217 11:48:58.748404 1881925 cri.go:89] found id: ""
	I1217 11:48:58.748453 1881925 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:48:58.761824 1881925 retry.go:31] will retry after 280.073121ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:48:58Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:48:59.042751 1881925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:48:59.063398 1881925 pause.go:52] kubelet running: false
	I1217 11:48:59.063447 1881925 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:48:59.175040 1881925 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:48:59.175129 1881925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:48:59.244904 1881925 cri.go:89] found id: "738b93ea18f545c95fceec6b5cc86d44e7222917e68ec3f742e523eda4b33f63"
	I1217 11:48:59.244931 1881925 cri.go:89] found id: "849a0e31e1a02f7d874df40c5247298424b7cc7ecc4c6af63bddcf02ed5b3bf5"
	I1217 11:48:59.244935 1881925 cri.go:89] found id: "6783c23b649da94f218090448e20634bd08ed8613ee0fc4970baf0710d1cb37a"
	I1217 11:48:59.244939 1881925 cri.go:89] found id: "f5667db940ec1747e843db665e3b2bb01474456533fe3fcb8120230f3d8fbea4"
	I1217 11:48:59.244942 1881925 cri.go:89] found id: "af6a64f34f500a3de7067fe3192f7f7f925bc08286bfed53e0f722f0b96a037c"
	I1217 11:48:59.244945 1881925 cri.go:89] found id: "0693ab25679d5756c48ca38263cfd8995d66c84715097772ae29f18b72bdf1a7"
	I1217 11:48:59.244947 1881925 cri.go:89] found id: "2998ec06acced2c8225eaab508140d0b2fb0b8b67c04c2f395375c463dfdf085"
	I1217 11:48:59.244950 1881925 cri.go:89] found id: ""
	I1217 11:48:59.244994 1881925 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:48:59.257336 1881925 retry.go:31] will retry after 308.232563ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:48:59Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:48:59.565807 1881925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:48:59.583167 1881925 pause.go:52] kubelet running: false
	I1217 11:48:59.583233 1881925 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:48:59.728673 1881925 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:48:59.728777 1881925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:48:59.820342 1881925 cri.go:89] found id: "738b93ea18f545c95fceec6b5cc86d44e7222917e68ec3f742e523eda4b33f63"
	I1217 11:48:59.820376 1881925 cri.go:89] found id: "849a0e31e1a02f7d874df40c5247298424b7cc7ecc4c6af63bddcf02ed5b3bf5"
	I1217 11:48:59.820403 1881925 cri.go:89] found id: "6783c23b649da94f218090448e20634bd08ed8613ee0fc4970baf0710d1cb37a"
	I1217 11:48:59.820409 1881925 cri.go:89] found id: "f5667db940ec1747e843db665e3b2bb01474456533fe3fcb8120230f3d8fbea4"
	I1217 11:48:59.820413 1881925 cri.go:89] found id: "af6a64f34f500a3de7067fe3192f7f7f925bc08286bfed53e0f722f0b96a037c"
	I1217 11:48:59.820418 1881925 cri.go:89] found id: "0693ab25679d5756c48ca38263cfd8995d66c84715097772ae29f18b72bdf1a7"
	I1217 11:48:59.820423 1881925 cri.go:89] found id: "2998ec06acced2c8225eaab508140d0b2fb0b8b67c04c2f395375c463dfdf085"
	I1217 11:48:59.820428 1881925 cri.go:89] found id: ""
	I1217 11:48:59.820474 1881925 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:48:59.836373 1881925 retry.go:31] will retry after 594.203149ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:48:59Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:49:00.431724 1881925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:49:00.445621 1881925 pause.go:52] kubelet running: false
	I1217 11:49:00.445695 1881925 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:49:00.571712 1881925 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:49:00.571788 1881925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:49:00.642664 1881925 cri.go:89] found id: "738b93ea18f545c95fceec6b5cc86d44e7222917e68ec3f742e523eda4b33f63"
	I1217 11:49:00.642688 1881925 cri.go:89] found id: "849a0e31e1a02f7d874df40c5247298424b7cc7ecc4c6af63bddcf02ed5b3bf5"
	I1217 11:49:00.642693 1881925 cri.go:89] found id: "6783c23b649da94f218090448e20634bd08ed8613ee0fc4970baf0710d1cb37a"
	I1217 11:49:00.642696 1881925 cri.go:89] found id: "f5667db940ec1747e843db665e3b2bb01474456533fe3fcb8120230f3d8fbea4"
	I1217 11:49:00.642699 1881925 cri.go:89] found id: "af6a64f34f500a3de7067fe3192f7f7f925bc08286bfed53e0f722f0b96a037c"
	I1217 11:49:00.642702 1881925 cri.go:89] found id: "0693ab25679d5756c48ca38263cfd8995d66c84715097772ae29f18b72bdf1a7"
	I1217 11:49:00.642704 1881925 cri.go:89] found id: "2998ec06acced2c8225eaab508140d0b2fb0b8b67c04c2f395375c463dfdf085"
	I1217 11:49:00.642707 1881925 cri.go:89] found id: ""
	I1217 11:49:00.642743 1881925 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:49:00.656966 1881925 out.go:203] 
	W1217 11:49:00.658343 1881925 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:49:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:49:00.658366 1881925 out.go:285] * 
	W1217 11:49:00.664971 1881925 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:49:00.666423 1881925 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-016656 --alsologtostderr -v=5" : exit status 80
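Every pause attempt in the stderr above fails at the same step: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so minikube never gets a container list to pause and gives up with GUEST_PAUSE. The following is a minimal sketch, assuming a local shell on the node (it is not minikube's pause code), that reproduces that probe and reports whether the default runc state root exists at all:

// runc_probe.go - sketch: rerun the command the pause path uses and, on
// failure, check for the state directory named in the error above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same command the log shows: sudo runc list -f json.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err == nil {
		fmt.Printf("runc list succeeded:\n%s\n", out)
		return
	}
	fmt.Printf("runc list failed: %v\n%s\n", err, out)

	// The failure in this report is "open /run/runc: no such file or directory".
	if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
		fmt.Println("/run/runc is missing: runc has no state root to list on this node")
	}
}

If /run/runc is absent while crictl still lists running containers (as it does above), the runtime is keeping its state under a different root than the one the pause path queries, which matches the repeated failures in this run.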
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-016656
helpers_test.go:244: (dbg) docker inspect pause-016656:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2",
	        "Created": "2025-12-17T11:47:54.47179057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1862828,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:47:54.520733984Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/hosts",
	        "LogPath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2-json.log",
	        "Name": "/pause-016656",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-016656:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-016656",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2",
	                "LowerDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-016656",
	                "Source": "/var/lib/docker/volumes/pause-016656/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-016656",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-016656",
	                "name.minikube.sigs.k8s.io": "pause-016656",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "72d78eaf97a5c8d420468ed41662c91befbda6d6c70147d3cf23fb73511c9069",
	            "SandboxKey": "/var/run/docker/netns/72d78eaf97a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-016656": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15db7586bad7d8c2bedb9753fe4609391a6a952673fbd0485dda9dd8c72dc243",
	                    "EndpointID": "cf7a72fd2f4607fdf3731a40db3449c5d9c1bffc136d6c1840dc7773cbc67d7f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6e:a7:2e:3d:30:fa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-016656",
	                        "dd404e5cbe7a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
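The inspect output above is where the harness gets its SSH endpoint: NetworkSettings.Ports maps 22/tcp to 127.0.0.1:34526, the same address the earlier sshutil.go line connects to. A small sketch that reads that mapping with the identical format template shown in the log (it assumes the pause-016656 container still exists):

// ssh_port.go - sketch: read the published host port for 22/tcp from the kic
// container, using the same docker inspect format string the log records.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-016656").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	port := strings.Trim(strings.TrimSpace(string(out)), "'")
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port) // 34526 in this run
}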
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-016656 -n pause-016656
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-016656 -n pause-016656: exit status 2 (386.364252ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-016656 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-016656 logs -n 25: (1.246417793s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │                     │
	│ stop    │ -p scheduled-stop-816702 --cancel-scheduled                                                                                              │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │                     │
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │                     │
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ delete  │ -p scheduled-stop-816702                                                                                                                 │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p insufficient-storage-006783 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-006783 │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │                     │
	│ delete  │ -p insufficient-storage-006783                                                                                                           │ insufficient-storage-006783 │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p offline-crio-990385 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-990385         │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │                     │
	│ start   │ -p pause-016656 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-016656                │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p force-systemd-env-154933 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-154933    │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p force-systemd-env-154933                                                                                                              │ force-systemd-env-154933    │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p missing-upgrade-837067 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-837067      │ jenkins │ v1.35.0 │ 17 Dec 25 11:48 UTC │                     │
	│ delete  │ -p offline-crio-990385                                                                                                                   │ offline-crio-990385         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p NoKubernetes-057260                                                                                                                   │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-556754   │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	│ start   │ -p pause-016656 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-016656                │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ ssh     │ -p NoKubernetes-057260 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	│ stop    │ -p NoKubernetes-057260                                                                                                                   │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	│ pause   │ -p pause-016656 --alsologtostderr -v=5                                                                                                   │ pause-016656                │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:48:55
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:48:55.293202 1880967 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:48:55.293307 1880967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:48:55.293311 1880967 out.go:374] Setting ErrFile to fd 2...
	I1217 11:48:55.293315 1880967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:48:55.293654 1880967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:48:55.294211 1880967 out.go:368] Setting JSON to false
	I1217 11:48:55.295557 1880967 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19880,"bootTime":1765952255,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:48:55.295615 1880967 start.go:143] virtualization: kvm guest
	I1217 11:48:55.297678 1880967 out.go:179] * [NoKubernetes-057260] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:48:55.299183 1880967 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:48:55.299228 1880967 notify.go:221] Checking for updates...
	I1217 11:48:55.301758 1880967 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:48:55.302880 1880967 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:48:55.303992 1880967 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:48:55.305041 1880967 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:48:55.306133 1880967 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:48:55.307909 1880967 config.go:182] Loaded profile config "NoKubernetes-057260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 11:48:55.308650 1880967 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1217 11:48:55.308680 1880967 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:48:55.338063 1880967 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:48:55.338180 1880967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:48:55.401421 1880967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:48:55.390106868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:48:55.401594 1880967 docker.go:319] overlay module found
	I1217 11:48:55.405643 1880967 out.go:179] * Using the docker driver based on existing profile
	I1217 11:48:55.407041 1880967 start.go:309] selected driver: docker
	I1217 11:48:55.407050 1880967 start.go:927] validating driver "docker" against &{Name:NoKubernetes-057260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-057260 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:55.407140 1880967 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:48:55.407228 1880967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:48:55.471080 1880967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:48:55.46025678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:48:55.471751 1880967 cni.go:84] Creating CNI manager for ""
	I1217 11:48:55.471806 1880967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:48:55.471842 1880967 start.go:353] cluster config:
	{Name:NoKubernetes-057260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-057260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:55.474628 1880967 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-057260
	I1217 11:48:55.475998 1880967 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:48:55.477438 1880967 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:48:51.637962 1873296 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-556754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:48:51.658752 1873296 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:48:51.663479 1873296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
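	The one-liner above makes the /etc/hosts update idempotent: it strips any existing host.minikube.internal line and re-appends the gateway mapping. A pure-Go sketch of the same rewrite, for illustration only, printing the result to stdout instead of installing it:

	// hosts_update.go - sketch of the idempotent /etc/hosts edit shown above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const name = "host.minikube.internal"
		const ip = "192.168.76.1" // network gateway in this run

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any prior mapping for the name, exactly like the grep -v above.
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		fmt.Println(strings.Join(kept, "\n"))
	}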
	I1217 11:48:51.677468 1873296 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-556754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-556754 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:48:51.677649 1873296 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:48:51.677728 1873296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:51.715945 1873296 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:51.715967 1873296 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:48:51.716013 1873296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:51.745835 1873296 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:51.745857 1873296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:48:51.745864 1873296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1217 11:48:51.745961 1873296 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-556754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-556754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:48:51.746061 1873296 ssh_runner.go:195] Run: crio config
	I1217 11:48:51.807710 1873296 cni.go:84] Creating CNI manager for ""
	I1217 11:48:51.807744 1873296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:48:51.807769 1873296 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:48:51.807800 1873296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-556754 NodeName:kubernetes-upgrade-556754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:48:51.807993 1873296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-556754"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:48:51.808071 1873296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 11:48:51.818206 1873296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:48:51.818286 1873296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:48:51.828723 1873296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 11:48:51.846142 1873296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:48:51.868308 1873296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
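	The kubeadm.yaml written above pins the kubelet to the cri-o socket and the systemd cgroup driver, the two values that must agree with the node's runtime. A small illustrative sketch that reads those fields back out of the KubeletConfiguration document; it uses gopkg.in/yaml.v3 purely as an assumption for the example, since minikube renders this file from templates and never parses it back:

	// kubelet_cfg.go - sketch: extract cgroupDriver and containerRuntimeEndpoint
	// from the KubeletConfiguration shown in the log above.
	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	const kubeletDoc = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	`

	type kubeletConfig struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
			fmt.Println("unmarshal failed:", err)
			return
		}
		// Both values must line up with the cri-o runtime on the node.
		fmt.Println("cgroup driver:", cfg.CgroupDriver)
		fmt.Println("CRI endpoint: ", cfg.ContainerRuntimeEndpoint)
	}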
	I1217 11:48:51.885174 1873296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:48:51.889997 1873296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:48:51.901495 1873296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:48:52.004502 1873296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:48:52.030133 1873296 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754 for IP: 192.168.76.2
	I1217 11:48:52.030168 1873296 certs.go:195] generating shared ca certs ...
	I1217 11:48:52.030190 1873296 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.030392 1873296 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:48:52.030462 1873296 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:48:52.030483 1873296 certs.go:257] generating profile certs ...
	I1217 11:48:52.030607 1873296 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key
	I1217 11:48:52.030637 1873296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt with IP's: []
	I1217 11:48:52.147023 1873296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt ...
	I1217 11:48:52.147058 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt: {Name:mkc959c6f5da50a9e6875645cd8dfd3fd1ed0d7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.147256 1873296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key ...
	I1217 11:48:52.147280 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key: {Name:mk72dc23d8193aaaefc7c69187fb3d99e32e8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.147441 1873296 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1
	I1217 11:48:52.147467 1873296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:48:52.264005 1873296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1 ...
	I1217 11:48:52.264038 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1: {Name:mkb6e33c29d04fa9f86243804f7be2b28f1cf3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.264237 1873296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1 ...
	I1217 11:48:52.264260 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1: {Name:mkf836cec3f2c4f6cf8a71e1931d736f7c7e6510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.264394 1873296 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt
	I1217 11:48:52.264497 1873296 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key
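The apiserver certificate assembled above carries the IP SANs listed a few lines earlier (the service ClusterIP 10.96.0.1, 10.0.0.1, loopback, and the node IP 192.168.76.2). A quick way to confirm the SANs on disk, with the path taken from the log (sketch):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt \
      | grep -A2 'Subject Alternative Name'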
	I1217 11:48:52.264604 1873296 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key
	I1217 11:48:52.264631 1873296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt with IP's: []
	I1217 11:48:52.482676 1873296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt ...
	I1217 11:48:52.482710 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt: {Name:mk97e5c46f9c1aead56b893cf3e50b910c7e092e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.482918 1873296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key ...
	I1217 11:48:52.482938 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key: {Name:mk46857dd6044bf3327dfcef7008dd29b5c8bbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.483199 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:48:52.483259 1873296 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:48:52.483273 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:48:52.483311 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:48:52.483350 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:48:52.483397 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:48:52.483474 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:48:52.484076 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:48:52.505793 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:48:52.526242 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:48:52.546247 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:48:52.565645 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 11:48:52.585887 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:48:52.606995 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:48:52.631800 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:48:52.654322 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:48:52.673795 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:48:52.693905 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:48:52.712949 1873296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:48:52.727013 1873296 ssh_runner.go:195] Run: openssl version
	I1217 11:48:52.733678 1873296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.742404 1873296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:48:52.751135 1873296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.755762 1873296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.755818 1873296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.797355 1873296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:48:52.807930 1873296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.818678 1873296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:48:52.827936 1873296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.833679 1873296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.833750 1873296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.871640 1873296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:48:52.880296 1873296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.888390 1873296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:48:52.896729 1873296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.901034 1873296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.901112 1873296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.938272 1873296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
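The test/ln/openssl sequence above installs each CA into the system trust directory: the certificate is linked under /etc/ssl/certs and a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) is created from the value printed by openssl x509 -hash. The same idiom for a single certificate, runnable by hand (sketch):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"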
	I1217 11:48:52.947111 1873296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:48:52.951273 1873296 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:48:52.951345 1873296 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-556754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-556754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:52.951446 1873296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:48:52.951508 1873296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:48:52.985984 1873296 cri.go:89] found id: ""
	I1217 11:48:52.986062 1873296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:48:52.995402 1873296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:48:53.004073 1873296 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:48:53.004142 1873296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:48:53.013050 1873296 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:48:53.013072 1873296 kubeadm.go:158] found existing configuration files:
	
	I1217 11:48:53.013112 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:48:53.022036 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:48:53.022128 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:48:53.030683 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:48:53.039388 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:48:53.039450 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:48:53.047717 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:48:53.056289 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:48:53.056346 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:48:53.064969 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:48:53.073439 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:48:53.073500 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
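The grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted before kubeadm init regenerates it. Collapsed into one loop (sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done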
	I1217 11:48:53.081875 1873296 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:48:53.178047 1873296 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:48:53.252359 1873296 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
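The Service-Kubelet warning is advisory, and the remedy is the command kubeadm itself suggests; on a node kept around after the test it would be cleared with:

    sudo systemctl enable kubelet.service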
	I1217 11:48:55.478904 1880967 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1217 11:48:55.478957 1880967 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:48:55.503080 1880967 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:48:55.503094 1880967 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	W1217 11:48:55.856615 1880967 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1217 11:48:56.175703 1880967 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
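Both preload mirrors return 404 for the synthetic v0.0.0 Kubernetes version used by the NoKubernetes profile, so no preload tarball is used, which is expected here. The same check can be reproduced directly against the first URL (sketch):

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 | head -n1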
	I1217 11:48:56.175857 1880967 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/NoKubernetes-057260/config.json ...
	I1217 11:48:56.176103 1880967 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:48:56.176134 1880967 start.go:360] acquireMachinesLock for NoKubernetes-057260: {Name:mkd24e14fee7a10014a18938138b94303e4302b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:48:56.176197 1880967 start.go:364] duration metric: took 44.847µs to acquireMachinesLock for "NoKubernetes-057260"
	I1217 11:48:56.176215 1880967 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:48:56.176219 1880967 fix.go:54] fixHost starting: 
	I1217 11:48:56.176436 1880967 cli_runner.go:164] Run: docker container inspect NoKubernetes-057260 --format={{.State.Status}}
	I1217 11:48:56.197703 1880967 fix.go:112] recreateIfNeeded on NoKubernetes-057260: state=Stopped err=<nil>
	W1217 11:48:56.197730 1880967 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:48:55.274450 1872100 cli_runner.go:164] Run: docker network inspect missing-upgrade-837067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:48:55.294649 1872100 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 11:48:55.298974 1872100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:48:55.312215 1872100 kubeadm.go:883] updating cluster {Name:missing-upgrade-837067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-837067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:48:55.312392 1872100 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1217 11:48:55.312456 1872100 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:55.411376 1872100 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:55.411400 1872100 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:48:55.411445 1872100 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:55.459197 1872100 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:55.459212 1872100 cache_images.go:84] Images are preloaded, skipping loading
	I1217 11:48:55.459219 1872100 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1217 11:48:55.459404 1872100 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-837067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-837067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
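The unit fragment above is what gets written to the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below). Recreating an equivalent drop-in by hand would look roughly like this sketch:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' \
      '[Unit]' 'Wants=crio.service' '' \
      '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-837067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2' '' \
      '[Install]' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl start kubelet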
	I1217 11:48:55.459511 1872100 ssh_runner.go:195] Run: crio config
	I1217 11:48:55.515545 1872100 cni.go:84] Creating CNI manager for ""
	I1217 11:48:55.515561 1872100 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:48:55.515572 1872100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1217 11:48:55.515599 1872100 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-837067 NodeName:missing-upgrade-837067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:48:55.515721 1872100 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-837067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:48:55.515782 1872100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1217 11:48:55.526306 1872100 binaries.go:44] Found k8s binaries, skipping transfer
	I1217 11:48:55.526382 1872100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:48:55.537220 1872100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 11:48:55.557563 1872100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:48:55.580312 1872100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
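The 2298-byte file landing at /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config printed above. It can be sanity-checked on the node before init; a sketch, assuming the kubeadm config validate subcommand shipped with this kubeadm version:

    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new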
	I1217 11:48:55.600783 1872100 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:48:55.604971 1872100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:48:55.617565 1872100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:48:55.689109 1872100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:48:55.713251 1872100 certs.go:68] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067 for IP: 192.168.103.2
	I1217 11:48:55.713264 1872100 certs.go:194] generating shared ca certs ...
	I1217 11:48:55.713281 1872100 certs.go:226] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:55.713454 1872100 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:48:55.713496 1872100 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:48:55.713502 1872100 certs.go:256] generating profile certs ...
	I1217 11:48:55.713588 1872100 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.key
	I1217 11:48:55.713603 1872100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.crt with IP's: []
	I1217 11:48:55.768099 1872100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.crt ...
	I1217 11:48:55.768117 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.crt: {Name:mkcee79ac49c17b92223bf4743ec1fb0439a1ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:55.768304 1872100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.key ...
	I1217 11:48:55.768315 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.key: {Name:mk086ba6497b1537290452fba1e98594d4a81406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:55.768437 1872100 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd
	I1217 11:48:55.768450 1872100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 11:48:56.120766 1872100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd ...
	I1217 11:48:56.120785 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd: {Name:mk760036d2ff1a511ddb7329819c412b7984f65e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.120957 1872100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd ...
	I1217 11:48:56.120966 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd: {Name:mk56e1457c3aded83fb895318296ad709a18674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.121039 1872100 certs.go:381] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt
	I1217 11:48:56.121114 1872100 certs.go:385] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key
	I1217 11:48:56.121163 1872100 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key
	I1217 11:48:56.121180 1872100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt with IP's: []
	I1217 11:48:56.272503 1872100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt ...
	I1217 11:48:56.272529 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt: {Name:mk967b58718d8a7bde433ec21d6553c1ba6ff0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.272732 1872100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key ...
	I1217 11:48:56.272748 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key: {Name:mk7870b6035b9fdee1c01962106ebc42fc8e4d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.273028 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:48:56.273086 1872100 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:48:56.273100 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:48:56.273126 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:48:56.273150 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:48:56.273174 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:48:56.273231 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:48:56.274225 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:48:56.304509 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:48:56.343224 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:48:56.371722 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:48:56.397902 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 11:48:56.426326 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:48:56.457096 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:48:56.491518 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:48:56.524800 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:48:56.561392 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:48:56.590585 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:48:56.619283 1872100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:48:56.640956 1872100 ssh_runner.go:195] Run: openssl version
	I1217 11:48:56.647048 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16729412.pem && ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem"
	I1217 11:48:56.657900 1872100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:48:56.661719 1872100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:48:56.661763 1872100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:48:56.669464 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0"
	I1217 11:48:56.682527 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1217 11:48:56.698968 1872100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:56.704862 1872100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:56.704921 1872100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:56.713429 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1217 11:48:56.728761 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1672941.pem && ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem"
	I1217 11:48:56.744725 1872100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:48:56.749717 1872100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:48:56.749776 1872100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:48:56.756950 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0"
	I1217 11:48:56.767959 1872100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:48:56.772766 1872100 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:48:56.772837 1872100 kubeadm.go:392] StartCluster: {Name:missing-upgrade-837067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-837067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:56.772949 1872100 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:48:56.773008 1872100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:48:56.820919 1872100 cri.go:89] found id: ""
	I1217 11:48:56.820982 1872100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:48:56.834299 1872100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:48:56.845344 1872100 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:48:56.845401 1872100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:48:56.856336 1872100 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:48:56.856382 1872100 kubeadm.go:157] found existing configuration files:
	
	I1217 11:48:56.856437 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:48:56.866637 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:48:56.866693 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:48:56.876278 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:48:56.887158 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:48:56.887233 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:48:56.896866 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:48:56.906577 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:48:56.906648 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:48:56.915749 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:48:56.925204 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:48:56.925270 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:48:56.935527 1872100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:48:56.976919 1872100 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1217 11:48:56.976964 1872100 kubeadm.go:310] [preflight] Running pre-flight checks
	I1217 11:48:56.996735 1872100 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:48:56.996830 1872100 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:48:56.996886 1872100 kubeadm.go:310] OS: Linux
	I1217 11:48:56.996962 1872100 kubeadm.go:310] CGROUPS_CPU: enabled
	I1217 11:48:56.997043 1872100 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1217 11:48:56.997113 1872100 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1217 11:48:56.997174 1872100 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1217 11:48:56.997233 1872100 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1217 11:48:56.997306 1872100 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1217 11:48:56.997366 1872100 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1217 11:48:56.997423 1872100 kubeadm.go:310] CGROUPS_IO: enabled
	I1217 11:48:57.056092 1872100 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:48:57.056246 1872100 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:48:57.056374 1872100 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:48:57.063888 1872100 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
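The CGROUPS_* lines in the preflight output come from kubeadm's system verification of the host's cgroup controllers; on a cgroup v2 host the same information can be read directly:

    cat /sys/fs/cgroup/cgroup.controllers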
	W1217 11:48:54.414905 1873860 node_ready.go:57] node "pause-016656" has "Ready":"False" status (will retry)
	I1217 11:48:56.414657 1873860 node_ready.go:49] node "pause-016656" is "Ready"
	I1217 11:48:56.414693 1873860 node_ready.go:38] duration metric: took 10.003916334s for node "pause-016656" to be "Ready" ...
	I1217 11:48:56.414712 1873860 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:48:56.414770 1873860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:48:56.428005 1873860 api_server.go:72] duration metric: took 10.149002131s to wait for apiserver process to appear ...
	I1217 11:48:56.428033 1873860 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:48:56.428058 1873860 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:48:56.432577 1873860 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:48:56.433706 1873860 api_server.go:141] control plane version: v1.34.3
	I1217 11:48:56.433740 1873860 api_server.go:131] duration metric: took 5.698165ms to wait for apiserver health ...
	I1217 11:48:56.433752 1873860 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:48:56.437506 1873860 system_pods.go:59] 7 kube-system pods found
	I1217 11:48:56.437613 1873860 system_pods.go:61] "coredns-66bc5c9577-xcwn4" [fd7eeebf-c0c0-4924-9c0f-c6270ee45be8] Running
	I1217 11:48:56.437631 1873860 system_pods.go:61] "etcd-pause-016656" [3f471de9-5bd5-4796-8782-4ae345738b9c] Running
	I1217 11:48:56.437638 1873860 system_pods.go:61] "kindnet-m9tqf" [51e023f8-bdf8-4fd1-8ad5-dc0d157fcf38] Running
	I1217 11:48:56.437643 1873860 system_pods.go:61] "kube-apiserver-pause-016656" [411e886f-e937-4300-b262-54bf7c427a81] Running
	I1217 11:48:56.437659 1873860 system_pods.go:61] "kube-controller-manager-pause-016656" [aac2771a-b74c-4734-8e4a-a54d34aca8b4] Running
	I1217 11:48:56.437683 1873860 system_pods.go:61] "kube-proxy-9gv76" [ebdee9ea-1b73-4865-a8b5-72f039f8bb34] Running
	I1217 11:48:56.437692 1873860 system_pods.go:61] "kube-scheduler-pause-016656" [68a7444d-6f18-4569-8d0b-7dabd2494695] Running
	I1217 11:48:56.437700 1873860 system_pods.go:74] duration metric: took 3.940536ms to wait for pod list to return data ...
	I1217 11:48:56.437710 1873860 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:48:56.440166 1873860 default_sa.go:45] found service account: "default"
	I1217 11:48:56.440187 1873860 default_sa.go:55] duration metric: took 2.466906ms for default service account to be created ...
	I1217 11:48:56.440197 1873860 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:48:56.443184 1873860 system_pods.go:86] 7 kube-system pods found
	I1217 11:48:56.443212 1873860 system_pods.go:89] "coredns-66bc5c9577-xcwn4" [fd7eeebf-c0c0-4924-9c0f-c6270ee45be8] Running
	I1217 11:48:56.443219 1873860 system_pods.go:89] "etcd-pause-016656" [3f471de9-5bd5-4796-8782-4ae345738b9c] Running
	I1217 11:48:56.443223 1873860 system_pods.go:89] "kindnet-m9tqf" [51e023f8-bdf8-4fd1-8ad5-dc0d157fcf38] Running
	I1217 11:48:56.443228 1873860 system_pods.go:89] "kube-apiserver-pause-016656" [411e886f-e937-4300-b262-54bf7c427a81] Running
	I1217 11:48:56.443233 1873860 system_pods.go:89] "kube-controller-manager-pause-016656" [aac2771a-b74c-4734-8e4a-a54d34aca8b4] Running
	I1217 11:48:56.443238 1873860 system_pods.go:89] "kube-proxy-9gv76" [ebdee9ea-1b73-4865-a8b5-72f039f8bb34] Running
	I1217 11:48:56.443243 1873860 system_pods.go:89] "kube-scheduler-pause-016656" [68a7444d-6f18-4569-8d0b-7dabd2494695] Running
	I1217 11:48:56.443252 1873860 system_pods.go:126] duration metric: took 3.048235ms to wait for k8s-apps to be running ...
	I1217 11:48:56.443264 1873860 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:48:56.443309 1873860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:48:56.469571 1873860 system_svc.go:56] duration metric: took 26.28868ms WaitForService to wait for kubelet
	I1217 11:48:56.469614 1873860 kubeadm.go:587] duration metric: took 10.190617868s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:48:56.469637 1873860 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:48:56.472975 1873860 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:48:56.473010 1873860 node_conditions.go:123] node cpu capacity is 8
	I1217 11:48:56.473039 1873860 node_conditions.go:105] duration metric: took 3.39449ms to run NodePressure ...
	I1217 11:48:56.473055 1873860 start.go:242] waiting for startup goroutines ...
	I1217 11:48:56.473076 1873860 start.go:247] waiting for cluster config update ...
	I1217 11:48:56.473086 1873860 start.go:256] writing updated cluster config ...
	I1217 11:48:56.473457 1873860 ssh_runner.go:195] Run: rm -f paused
	I1217 11:48:56.478494 1873860 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:48:56.479061 1873860 kapi.go:59] client config for pause-016656: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/pause-016656/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/pause-016656/client.key", CAFile:"/home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 11:48:56.482711 1873860 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xcwn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.489006 1873860 pod_ready.go:94] pod "coredns-66bc5c9577-xcwn4" is "Ready"
	I1217 11:48:56.489037 1873860 pod_ready.go:86] duration metric: took 6.300583ms for pod "coredns-66bc5c9577-xcwn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.491561 1873860 pod_ready.go:83] waiting for pod "etcd-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.497172 1873860 pod_ready.go:94] pod "etcd-pause-016656" is "Ready"
	I1217 11:48:56.497202 1873860 pod_ready.go:86] duration metric: took 5.614635ms for pod "etcd-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.499515 1873860 pod_ready.go:83] waiting for pod "kube-apiserver-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.504066 1873860 pod_ready.go:94] pod "kube-apiserver-pause-016656" is "Ready"
	I1217 11:48:56.504092 1873860 pod_ready.go:86] duration metric: took 4.530028ms for pod "kube-apiserver-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.506324 1873860 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.882937 1873860 pod_ready.go:94] pod "kube-controller-manager-pause-016656" is "Ready"
	I1217 11:48:56.882967 1873860 pod_ready.go:86] duration metric: took 376.622034ms for pod "kube-controller-manager-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:57.083154 1873860 pod_ready.go:83] waiting for pod "kube-proxy-9gv76" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:57.065749 1872100 out.go:235]   - Generating certificates and keys ...
	I1217 11:48:57.065844 1872100 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1217 11:48:57.065910 1872100 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1217 11:48:57.231593 1872100 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:48:57.295295 1872100 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:48:57.559278 1872100 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:48:57.483244 1873860 pod_ready.go:94] pod "kube-proxy-9gv76" is "Ready"
	I1217 11:48:57.483277 1873860 pod_ready.go:86] duration metric: took 400.093046ms for pod "kube-proxy-9gv76" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:57.683122 1873860 pod_ready.go:83] waiting for pod "kube-scheduler-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:58.082861 1873860 pod_ready.go:94] pod "kube-scheduler-pause-016656" is "Ready"
	I1217 11:48:58.082894 1873860 pod_ready.go:86] duration metric: took 399.744839ms for pod "kube-scheduler-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:58.082910 1873860 pod_ready.go:40] duration metric: took 1.604332629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:48:58.134637 1873860 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:48:58.137086 1873860 out.go:179] * Done! kubectl is now configured to use "pause-016656" cluster and "default" namespace by default
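The per-pod Ready waits above can be reproduced against the finished cluster with plain kubectl, since the profile is now the active context; a rough equivalent (sketch):

    kubectl --context pause-016656 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=240s
    kubectl --context pause-016656 -n kube-system get pods -o wide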
	I1217 11:48:57.883956 1872100 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1217 11:48:57.957378 1872100 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1217 11:48:57.957500 1872100 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-837067] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 11:48:58.103357 1872100 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1217 11:48:58.103599 1872100 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-837067] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 11:48:58.246089 1872100 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:48:58.407262 1872100 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:48:58.562339 1872100 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1217 11:48:58.562685 1872100 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:48:58.808055 1872100 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:48:58.971099 1872100 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:48:59.319690 1872100 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:48:59.671158 1872100 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:48:59.847253 1872100 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:48:59.847966 1872100 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:48:59.852363 1872100 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
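	As a side note on the certificate phase above: the SANs reported for the etcd serving cert (localhost, missing-upgrade-837067, 192.168.103.2, 127.0.0.1, ::1) can be confirmed on the node with openssl. A minimal sketch, assuming kubeadm's default PKI directory (the path is not stated in this log):
	
		# inspect the SAN list on the generated etcd serving certificate
		sudo openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'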
	I1217 11:48:56.199764 1880967 out.go:252] * Restarting existing docker container for "NoKubernetes-057260" ...
	I1217 11:48:56.199937 1880967 cli_runner.go:164] Run: docker start NoKubernetes-057260
	I1217 11:48:56.471401 1880967 cli_runner.go:164] Run: docker container inspect NoKubernetes-057260 --format={{.State.Status}}
	I1217 11:48:56.495268 1880967 kic.go:430] container "NoKubernetes-057260" state is running.
	I1217 11:48:56.495765 1880967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-057260
	I1217 11:48:56.518492 1880967 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/NoKubernetes-057260/config.json ...
	I1217 11:48:56.518756 1880967 machine.go:94] provisionDockerMachine start ...
	I1217 11:48:56.518812 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:48:56.540933 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:48:56.541179 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:48:56.541186 1880967 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:48:56.542018 1880967 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52294->127.0.0.1:34546: read: connection reset by peer
	I1217 11:48:59.687934 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-057260
	
	I1217 11:48:59.687956 1880967 ubuntu.go:182] provisioning hostname "NoKubernetes-057260"
	I1217 11:48:59.688031 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:48:59.713437 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:48:59.713824 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:48:59.713835 1880967 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-057260 && echo "NoKubernetes-057260" | sudo tee /etc/hostname
	I1217 11:48:59.871427 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-057260
	
	I1217 11:48:59.871508 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:48:59.894905 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:48:59.895376 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:48:59.895407 1880967 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-057260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-057260/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-057260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:49:00.033746 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:49:00.033767 1880967 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:49:00.033792 1880967 ubuntu.go:190] setting up certificates
	I1217 11:49:00.033804 1880967 provision.go:84] configureAuth start
	I1217 11:49:00.033886 1880967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-057260
	I1217 11:49:00.055027 1880967 provision.go:143] copyHostCerts
	I1217 11:49:00.055096 1880967 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:49:00.055107 1880967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:49:00.055165 1880967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:49:00.055285 1880967 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:49:00.055291 1880967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:49:00.055335 1880967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:49:00.055444 1880967 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:49:00.055451 1880967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:49:00.055490 1880967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:49:00.055640 1880967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-057260 san=[127.0.0.1 192.168.85.2 NoKubernetes-057260 localhost minikube]
	I1217 11:49:00.101381 1880967 provision.go:177] copyRemoteCerts
	I1217 11:49:00.101429 1880967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:49:00.101471 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.119696 1880967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34546 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/NoKubernetes-057260/id_rsa Username:docker}
	I1217 11:49:00.214511 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:49:00.234098 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 11:49:00.252391 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:49:00.270148 1880967 provision.go:87] duration metric: took 236.329499ms to configureAuth
	I1217 11:49:00.270171 1880967 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:49:00.270401 1880967 config.go:182] Loaded profile config "NoKubernetes-057260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 11:49:00.270546 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.289670 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:49:00.289999 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:49:00.290017 1880967 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.906280522Z" level=info msg="RDT not available in the host system"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.906298257Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.907364927Z" level=info msg="Conmon does support the --sync option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.907393198Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.90742119Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.908252768Z" level=info msg="Conmon does support the --sync option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.908276981Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.912918052Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.912946172Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.914014004Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.914572867Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.91464142Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.010756075Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-xcwn4 Namespace:kube-system ID:c613cd5a14f9e6c991632c731eb15db0bd1f8a28c52ff6822d5584af33722bd2 UID:fd7eeebf-c0c0-4924-9c0f-c6270ee45be8 NetNS:/var/run/netns/b95ef1ca-b11f-4ac3-bd94-4261b419a7a8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a5a8}] Aliases:map[]}"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.01095944Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-xcwn4 for CNI network kindnet (type=ptp)"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011351599Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011389305Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011444352Z" level=info msg="Create NRI interface"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011595754Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011613128Z" level=info msg="runtime interface created"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011628265Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011640478Z" level=info msg="runtime interface starting up..."
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011648368Z" level=info msg="starting plugins..."
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011667633Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.012069996Z" level=info msg="No systemd watchdog enabled"
	Dec 17 11:48:45 pause-016656 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	738b93ea18f54       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     26 seconds ago      Running             coredns                   0                   c613cd5a14f9e       coredns-66bc5c9577-xcwn4               kube-system
	849a0e31e1a02       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   38 seconds ago      Running             kindnet-cni               0                   0edf52eafd8d3       kindnet-m9tqf                          kube-system
	6783c23b649da       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     40 seconds ago      Running             kube-proxy                0                   c554107b48c99       kube-proxy-9gv76                       kube-system
	f5667db940ec1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     52 seconds ago      Running             kube-controller-manager   0                   b7852f3a396ca       kube-controller-manager-pause-016656   kube-system
	af6a64f34f500       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     52 seconds ago      Running             etcd                      0                   941d8f4a270ba       etcd-pause-016656                      kube-system
	0693ab25679d5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     52 seconds ago      Running             kube-apiserver            0                   380e2c7df2e68       kube-apiserver-pause-016656            kube-system
	2998ec06acced       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     52 seconds ago      Running             kube-scheduler            0                   ea75785a44953       kube-scheduler-pause-016656            kube-system
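	The container status table above is a CRI-level view of the node; the same listing can be pulled directly from CRI-O with crictl (a sketch, assuming crictl is already pointed at unix:///var/run/crio/crio.sock as elsewhere in these logs):
	
		# list all containers (including exited ones) and their pod sandboxes
		sudo crictl ps -a
		sudo crictl pods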
	
	
	==> coredns [738b93ea18f545c95fceec6b5cc86d44e7222917e68ec3f742e523eda4b33f63] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53425 - 46771 "HINFO IN 7261628847673218474.7996560943190306420. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03799586s
	
	
	==> describe nodes <==
	Name:               pause-016656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-016656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=pause-016656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_48_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:48:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-016656
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:48:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-016656
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                714c64a1-e138-4bdb-af56-24cbdbc1efaa
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xcwn4                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     41s
	  kube-system                 etcd-pause-016656                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         46s
	  kube-system                 kindnet-m9tqf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-apiserver-pause-016656             250m (3%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-pause-016656    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-9gv76                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-pause-016656             100m (1%)     0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 40s               kube-proxy       
	  Normal  Starting                 46s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s               kubelet          Node pause-016656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s               kubelet          Node pause-016656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s               kubelet          Node pause-016656 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s               node-controller  Node pause-016656 event: Registered Node pause-016656 in Controller
	  Normal  NodeNotReady             15s               kubelet          Node pause-016656 status is now: NodeNotReady
	  Normal  NodeReady                5s (x2 over 27s)  kubelet          Node pause-016656 status is now: NodeReady
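	The node summary above reflects what the API server reported for the control-plane node at collection time; it can be regenerated against the live profile with kubectl (a sketch, assuming the kubectl context created for the profile):
	
		kubectl --context pause-016656 describe node pause-016656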
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [af6a64f34f500a3de7067fe3192f7f7f925bc08286bfed53e0f722f0b96a037c] <==
	{"level":"warn","ts":"2025-12-17T11:48:11.889130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.896474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.903755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.910618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.917827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.934967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.941609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.948923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.999896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46172","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T11:48:34.304755Z","caller":"traceutil/trace.go:172","msg":"trace[150989265] linearizableReadLoop","detail":"{readStateIndex:400; appliedIndex:400; }","duration":"100.515878ms","start":"2025-12-17T11:48:34.204214Z","end":"2025-12-17T11:48:34.304730Z","steps":["trace[150989265] 'read index received'  (duration: 100.49466ms)","trace[150989265] 'applied index is now lower than readState.Index'  (duration: 18.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.309341Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.082435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-12-17T11:48:34.309442Z","caller":"traceutil/trace.go:172","msg":"trace[465844435] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:386; }","duration":"105.22609ms","start":"2025-12-17T11:48:34.204202Z","end":"2025-12-17T11:48:34.309428Z","steps":["trace[465844435] 'agreement among raft nodes before linearized reading'  (duration: 100.615545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.309921Z","caller":"traceutil/trace.go:172","msg":"trace[546769526] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"166.590982ms","start":"2025-12-17T11:48:34.143310Z","end":"2025-12-17T11:48:34.309901Z","steps":["trace[546769526] 'process raft request'  (duration: 161.444675ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.472238Z","caller":"traceutil/trace.go:172","msg":"trace[1199757923] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:402; }","duration":"122.106605ms","start":"2025-12-17T11:48:34.350105Z","end":"2025-12-17T11:48:34.472211Z","steps":["trace[1199757923] 'read index received'  (duration: 122.087466ms)","trace[1199757923] 'applied index is now lower than readState.Index'  (duration: 17.958µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.478901Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.772032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:48:34.478966Z","caller":"traceutil/trace.go:172","msg":"trace[1905792807] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:387; }","duration":"128.861414ms","start":"2025-12-17T11:48:34.350092Z","end":"2025-12-17T11:48:34.478953Z","steps":["trace[1905792807] 'agreement among raft nodes before linearized reading'  (duration: 122.217473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.478991Z","caller":"traceutil/trace.go:172","msg":"trace[1342931433] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"156.924302ms","start":"2025-12-17T11:48:34.322057Z","end":"2025-12-17T11:48:34.478982Z","steps":["trace[1342931433] 'process raft request'  (duration: 150.213792ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.480575Z","caller":"traceutil/trace.go:172","msg":"trace[1570635261] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"133.294999ms","start":"2025-12-17T11:48:34.347262Z","end":"2025-12-17T11:48:34.480557Z","steps":["trace[1570635261] 'process raft request'  (duration: 132.933202ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.615165Z","caller":"traceutil/trace.go:172","msg":"trace[957408065] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:404; }","duration":"129.956406ms","start":"2025-12-17T11:48:34.485187Z","end":"2025-12-17T11:48:34.615143Z","steps":["trace[957408065] 'read index received'  (duration: 129.948086ms)","trace[957408065] 'applied index is now lower than readState.Index'  (duration: 7.138µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.691440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.235366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-12-17T11:48:34.691498Z","caller":"traceutil/trace.go:172","msg":"trace[405206821] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:389; }","duration":"206.300639ms","start":"2025-12-17T11:48:34.485181Z","end":"2025-12-17T11:48:34.691482Z","steps":["trace[405206821] 'agreement among raft nodes before linearized reading'  (duration: 130.052557ms)","trace[405206821] 'range keys from in-memory index tree'  (duration: 76.135088ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.691620Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.343841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:48:34.691679Z","caller":"traceutil/trace.go:172","msg":"trace[583612712] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:390; }","duration":"206.413963ms","start":"2025-12-17T11:48:34.485251Z","end":"2025-12-17T11:48:34.691665Z","steps":["trace[583612712] 'agreement among raft nodes before linearized reading'  (duration: 206.319109ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.691730Z","caller":"traceutil/trace.go:172","msg":"trace[1939106350] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"207.975684ms","start":"2025-12-17T11:48:34.483734Z","end":"2025-12-17T11:48:34.691709Z","steps":["trace[1939106350] 'process raft request'  (duration: 131.449812ms)","trace[1939106350] 'compare'  (duration: 76.255847ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:48:34.881717Z","caller":"traceutil/trace.go:172","msg":"trace[768152905] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"154.819671ms","start":"2025-12-17T11:48:34.726876Z","end":"2025-12-17T11:48:34.881695Z","steps":["trace[768152905] 'process raft request'  (duration: 74.678185ms)","trace[768152905] 'compare'  (duration: 80.019133ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:49:02 up  5:31,  0 user,  load average: 4.55, 2.07, 1.47
	Linux pause-016656 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [849a0e31e1a02f7d874df40c5247298424b7cc7ecc4c6af63bddcf02ed5b3bf5] <==
	I1217 11:48:23.706397       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:48:23.706680       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:48:23.706813       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:48:23.706834       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:48:23.706854       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:48:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:48:23.911411       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:48:23.911471       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:48:23.911484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:48:24.004106       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:48:24.255919       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:48:24.255948       1 metrics.go:72] Registering metrics
	I1217 11:48:24.256020       1 controller.go:711] "Syncing nftables rules"
	I1217 11:48:33.918271       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:48:33.918340       1 main.go:301] handling current node
	I1217 11:48:44.003523       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:48:44.003587       1 main.go:301] handling current node
	I1217 11:48:53.917655       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:48:53.917706       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0693ab25679d5756c48ca38263cfd8995d66c84715097772ae29f18b72bdf1a7] <==
	I1217 11:48:12.504035       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 11:48:12.504122       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 11:48:12.504511       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 11:48:12.509203       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 11:48:12.509284       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:48:12.516027       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:48:12.516283       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:48:12.695335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:48:13.405457       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 11:48:13.409846       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:48:13.409866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:48:14.089442       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:48:14.132327       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:48:14.210035       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:48:14.216488       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 11:48:14.217642       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:48:14.221730       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:48:14.442460       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:48:15.360990       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:48:15.382798       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:48:15.418828       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 11:48:20.197002       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 11:48:20.294903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:48:20.498063       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:48:20.512803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [f5667db940ec1747e843db665e3b2bb01474456533fe3fcb8120230f3d8fbea4] <==
	I1217 11:48:19.443784       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:48:19.443865       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:48:19.443948       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:48:19.443949       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 11:48:19.443995       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:48:19.446268       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:48:19.447406       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 11:48:19.448594       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 11:48:19.448741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:48:19.448759       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 11:48:19.448767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 11:48:19.448921       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 11:48:19.449000       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 11:48:19.449061       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 11:48:19.449076       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 11:48:19.449083       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 11:48:19.456181       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 11:48:19.456349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:48:19.456858       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:48:19.457407       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-016656" podCIDRs=["10.244.0.0/24"]
	I1217 11:48:19.466483       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:48:19.467697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:48:34.396318       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 11:48:49.399020       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 11:48:59.400143       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6783c23b649da94f218090448e20634bd08ed8613ee0fc4970baf0710d1cb37a] <==
	I1217 11:48:21.209864       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:48:21.287657       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:48:21.388040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:48:21.388084       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 11:48:21.388172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:48:21.408453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:48:21.408504       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:48:21.414079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:48:21.414498       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:48:21.414585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:48:21.416013       1 config.go:200] "Starting service config controller"
	I1217 11:48:21.416058       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:48:21.416037       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:48:21.416105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:48:21.416158       1 config.go:309] "Starting node config controller"
	I1217 11:48:21.416175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:48:21.416183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:48:21.416193       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:48:21.416217       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:48:21.516331       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:48:21.516403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:48:21.516452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2998ec06acced2c8225eaab508140d0b2fb0b8b67c04c2f395375c463dfdf085] <==
	E1217 11:48:12.466993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:48:12.467023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:48:12.467115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:48:12.467217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:48:12.467285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:48:12.467566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:48:12.467298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:48:12.467351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:48:12.467653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:48:12.468182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:48:12.468573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:48:13.274781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 11:48:13.350344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 11:48:13.551073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:48:13.572525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:48:13.586974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:48:13.595178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:48:13.605525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:48:13.649828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:48:13.663120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:48:13.729015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:48:13.774420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:48:13.813293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:48:14.020829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1217 11:48:16.263152       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:48:39 pause-016656 kubelet[1361]: E1217 11:48:39.420582    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:39 pause-016656 kubelet[1361]: E1217 11:48:39.420605    1361 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:39 pause-016656 kubelet[1361]: W1217 11:48:39.436957    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:39 pause-016656 kubelet[1361]: W1217 11:48:39.624282    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:39 pause-016656 kubelet[1361]: W1217 11:48:39.844938    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:40 pause-016656 kubelet[1361]: W1217 11:48:40.255087    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.325822    1361 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.325897    1361 kubelet.go:2997] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.421788    1361 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.421854    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.421870    1361 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: W1217 11:48:40.967601    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.335815    1361 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.336090    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.336196    1361 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.336224    1361 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.423211    1361 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.423264    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.423277    1361 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:45 pause-016656 kubelet[1361]: E1217 11:48:45.328119    1361 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Dec 17 11:48:46 pause-016656 kubelet[1361]: I1217 11:48:46.141796    1361 setters.go:543] "Node became not ready" node="pause-016656" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T11:48:46Z","lastTransitionTime":"2025-12-17T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Dec 17 11:48:58 pause-016656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:48:58 pause-016656 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:48:58 pause-016656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:48:58 pause-016656 systemd[1]: kubelet.service: Consumed 1.783s CPU time.
	

                                                
                                                
-- /stdout --
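The kubelet log above shows every runtime call failing because /var/run/crio/crio.sock is absent, which is why the node flips to Ready=False (NetworkPluginNotReady) and the pause operation in TestPause/serial/Pause cannot complete. As a minimal sketch of how one might confirm that state by hand, assuming the pause-016656 profile is still up and the standard crio/crictl tooling is installed in the node (these commands are illustrative and are not part of the test itself):

	minikube -p pause-016656 ssh -- sudo systemctl status crio --no-pager
	minikube -p pause-016656 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info

If crio.service is stopped, as the log suggests, the second command would be expected to fail with the same "connect: no such file or directory" error seen above.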
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-016656 -n pause-016656
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-016656 -n pause-016656: exit status 2 (398.646872ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-016656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-016656
helpers_test.go:244: (dbg) docker inspect pause-016656:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2",
	        "Created": "2025-12-17T11:47:54.47179057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1862828,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:47:54.520733984Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/hosts",
	        "LogPath": "/var/lib/docker/containers/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2/dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2-json.log",
	        "Name": "/pause-016656",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-016656:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-016656",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd404e5cbe7a6b0f01bd9ef01f08d473ebe58e54f20f308ba3c3b3a1a8770fe2",
	                "LowerDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd3d428ba0e097b6fc22a064564b0abb746a6882e13fa806c206b37b82fd8b9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-016656",
	                "Source": "/var/lib/docker/volumes/pause-016656/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-016656",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-016656",
	                "name.minikube.sigs.k8s.io": "pause-016656",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "72d78eaf97a5c8d420468ed41662c91befbda6d6c70147d3cf23fb73511c9069",
	            "SandboxKey": "/var/run/docker/netns/72d78eaf97a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-016656": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15db7586bad7d8c2bedb9753fe4609391a6a952673fbd0485dda9dd8c72dc243",
	                    "EndpointID": "cf7a72fd2f4607fdf3731a40db3449c5d9c1bffc136d6c1840dc7773cbc67d7f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6e:a7:2e:3d:30:fa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-016656",
	                        "dd404e5cbe7a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
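The docker inspect output above shows the node container itself still running, with the apiserver's 8443/tcp published on 127.0.0.1:34529. As an illustrative sketch (not part of the test harness), assuming the docker CLI's Go-template formatting, that single mapping could be pulled out with:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-016656

which for this run should print 34529; the status checks below nevertheless exit with status 2 even though the host reports Running.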
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-016656 -n pause-016656
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-016656 -n pause-016656: exit status 2 (476.661367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-016656 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-016656 logs -n 25: (1.193375293s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-816702 --cancel-scheduled                                                                                              │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │                     │
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │                     │
	│ stop    │ -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ delete  │ -p scheduled-stop-816702                                                                                                                 │ scheduled-stop-816702       │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p insufficient-storage-006783 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-006783 │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │                     │
	│ delete  │ -p insufficient-storage-006783                                                                                                           │ insufficient-storage-006783 │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p offline-crio-990385 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-990385         │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │                     │
	│ start   │ -p pause-016656 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-016656                │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p force-systemd-env-154933 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-154933    │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p force-systemd-env-154933                                                                                                              │ force-systemd-env-154933    │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p missing-upgrade-837067 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-837067      │ jenkins │ v1.35.0 │ 17 Dec 25 11:48 UTC │                     │
	│ delete  │ -p offline-crio-990385                                                                                                                   │ offline-crio-990385         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p NoKubernetes-057260                                                                                                                   │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-556754   │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	│ start   │ -p pause-016656 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-016656                │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ ssh     │ -p NoKubernetes-057260 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	│ stop    │ -p NoKubernetes-057260                                                                                                                   │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p NoKubernetes-057260 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ pause   │ -p pause-016656 --alsologtostderr -v=5                                                                                                   │ pause-016656                │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │                     │
	│ ssh     │ -p NoKubernetes-057260 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-057260         │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:48:55
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:48:55.293202 1880967 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:48:55.293307 1880967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:48:55.293311 1880967 out.go:374] Setting ErrFile to fd 2...
	I1217 11:48:55.293315 1880967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:48:55.293654 1880967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:48:55.294211 1880967 out.go:368] Setting JSON to false
	I1217 11:48:55.295557 1880967 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19880,"bootTime":1765952255,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:48:55.295615 1880967 start.go:143] virtualization: kvm guest
	I1217 11:48:55.297678 1880967 out.go:179] * [NoKubernetes-057260] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:48:55.299183 1880967 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:48:55.299228 1880967 notify.go:221] Checking for updates...
	I1217 11:48:55.301758 1880967 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:48:55.302880 1880967 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:48:55.303992 1880967 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:48:55.305041 1880967 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:48:55.306133 1880967 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:48:55.307909 1880967 config.go:182] Loaded profile config "NoKubernetes-057260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 11:48:55.308650 1880967 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1217 11:48:55.308680 1880967 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:48:55.338063 1880967 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:48:55.338180 1880967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:48:55.401421 1880967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:48:55.390106868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:48:55.401594 1880967 docker.go:319] overlay module found
	I1217 11:48:55.405643 1880967 out.go:179] * Using the docker driver based on existing profile
	I1217 11:48:55.407041 1880967 start.go:309] selected driver: docker
	I1217 11:48:55.407050 1880967 start.go:927] validating driver "docker" against &{Name:NoKubernetes-057260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-057260 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:55.407140 1880967 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:48:55.407228 1880967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:48:55.471080 1880967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:48:55.46025678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:48:55.471751 1880967 cni.go:84] Creating CNI manager for ""
	I1217 11:48:55.471806 1880967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:48:55.471842 1880967 start.go:353] cluster config:
	{Name:NoKubernetes-057260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-057260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:55.474628 1880967 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-057260
	I1217 11:48:55.475998 1880967 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:48:55.477438 1880967 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:48:51.637962 1873296 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-556754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:48:51.658752 1873296 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:48:51.663479 1873296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:48:51.677468 1873296 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-556754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-556754 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:48:51.677649 1873296 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:48:51.677728 1873296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:51.715945 1873296 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:51.715967 1873296 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:48:51.716013 1873296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:51.745835 1873296 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:51.745857 1873296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:48:51.745864 1873296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1217 11:48:51.745961 1873296 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-556754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-556754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:48:51.746061 1873296 ssh_runner.go:195] Run: crio config
	I1217 11:48:51.807710 1873296 cni.go:84] Creating CNI manager for ""
	I1217 11:48:51.807744 1873296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:48:51.807769 1873296 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:48:51.807800 1873296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-556754 NodeName:kubernetes-upgrade-556754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:48:51.807993 1873296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-556754"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:48:51.808071 1873296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 11:48:51.818206 1873296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:48:51.818286 1873296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:48:51.828723 1873296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 11:48:51.846142 1873296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:48:51.868308 1873296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1217 11:48:51.885174 1873296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:48:51.889997 1873296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:48:51.901495 1873296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:48:52.004502 1873296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:48:52.030133 1873296 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754 for IP: 192.168.76.2
	I1217 11:48:52.030168 1873296 certs.go:195] generating shared ca certs ...
	I1217 11:48:52.030190 1873296 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.030392 1873296 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:48:52.030462 1873296 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:48:52.030483 1873296 certs.go:257] generating profile certs ...
	I1217 11:48:52.030607 1873296 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key
	I1217 11:48:52.030637 1873296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt with IP's: []
	I1217 11:48:52.147023 1873296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt ...
	I1217 11:48:52.147058 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt: {Name:mkc959c6f5da50a9e6875645cd8dfd3fd1ed0d7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.147256 1873296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key ...
	I1217 11:48:52.147280 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key: {Name:mk72dc23d8193aaaefc7c69187fb3d99e32e8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.147441 1873296 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1
	I1217 11:48:52.147467 1873296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:48:52.264005 1873296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1 ...
	I1217 11:48:52.264038 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1: {Name:mkb6e33c29d04fa9f86243804f7be2b28f1cf3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.264237 1873296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1 ...
	I1217 11:48:52.264260 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1: {Name:mkf836cec3f2c4f6cf8a71e1931d736f7c7e6510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.264394 1873296 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt.105d7ac1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt
	I1217 11:48:52.264497 1873296 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key.105d7ac1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key
	I1217 11:48:52.264604 1873296 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key
	I1217 11:48:52.264631 1873296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt with IP's: []
	I1217 11:48:52.482676 1873296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt ...
	I1217 11:48:52.482710 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt: {Name:mk97e5c46f9c1aead56b893cf3e50b910c7e092e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.482918 1873296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key ...
	I1217 11:48:52.482938 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key: {Name:mk46857dd6044bf3327dfcef7008dd29b5c8bbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:52.483199 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:48:52.483259 1873296 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:48:52.483273 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:48:52.483311 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:48:52.483350 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:48:52.483397 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:48:52.483474 1873296 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:48:52.484076 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:48:52.505793 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:48:52.526242 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:48:52.546247 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:48:52.565645 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 11:48:52.585887 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:48:52.606995 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:48:52.631800 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:48:52.654322 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:48:52.673795 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:48:52.693905 1873296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:48:52.712949 1873296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:48:52.727013 1873296 ssh_runner.go:195] Run: openssl version
	I1217 11:48:52.733678 1873296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.742404 1873296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:48:52.751135 1873296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.755762 1873296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.755818 1873296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:52.797355 1873296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:48:52.807930 1873296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.818678 1873296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:48:52.827936 1873296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.833679 1873296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.833750 1873296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:48:52.871640 1873296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:48:52.880296 1873296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.888390 1873296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:48:52.896729 1873296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.901034 1873296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.901112 1873296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:48:52.938272 1873296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
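	(The three blocks above repeat the same pattern for each CA: hash the PEM with "openssl x509 -hash -noout", then symlink it into /etc/ssl/certs as <hash>.0 so the node's trust store picks it up. A minimal Go sketch of that flow, assuming openssl is on PATH; the paths and helper name are illustrative, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert computes the OpenSSL subject hash of a PEM certificate and
	// symlinks it into certsDir as <hash>.0, mirroring the log sequence above.
	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl hash failed: %w", err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // "-fs" semantics: replace a stale link if one exists
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Path taken from the log; writing /etc/ssl/certs requires root.
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	)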
	I1217 11:48:52.947111 1873296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:48:52.951273 1873296 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
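	(The failed stat above is what the test interprets as "likely first start": the kubelet client cert has not been generated yet, so kubeadm will be run from scratch. A tiny sketch of that existence check, assuming a plain file test is sufficient:

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		// Path from the log; absence of this cert is treated as a first start.
		const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		if _, err := os.Stat(certPath); errors.Is(err, fs.ErrNotExist) {
			fmt.Println("cert doesn't exist, likely first start")
		} else if err != nil {
			fmt.Println("stat error:", err)
		} else {
			fmt.Println("cert already present, reusing existing cluster state")
		}
	}
	)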
	I1217 11:48:52.951345 1873296 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-556754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-556754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:52.951446 1873296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:48:52.951508 1873296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:48:52.985984 1873296 cri.go:89] found id: ""
	I1217 11:48:52.986062 1873296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:48:52.995402 1873296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:48:53.004073 1873296 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:48:53.004142 1873296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:48:53.013050 1873296 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:48:53.013072 1873296 kubeadm.go:158] found existing configuration files:
	
	I1217 11:48:53.013112 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:48:53.022036 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:48:53.022128 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:48:53.030683 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:48:53.039388 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:48:53.039450 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:48:53.047717 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:48:53.056289 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:48:53.056346 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:48:53.064969 1873296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:48:53.073439 1873296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:48:53.073500 1873296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:48:53.081875 1873296 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:48:53.178047 1873296 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:48:53.252359 1873296 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:48:55.478904 1880967 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1217 11:48:55.478957 1880967 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:48:55.503080 1880967 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:48:55.503094 1880967 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	W1217 11:48:55.856615 1880967 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1217 11:48:56.175703 1880967 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
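	(The two 404s above show the preload lookup for "v0.0.0" probing the GCS bucket first and the GitHub release mirror second before giving up and continuing without a preload tarball. A hedged Go sketch of such a probe, using only the standard library; the function name and fallback behaviour are illustrative:

	package main

	import (
		"fmt"
		"net/http"
	)

	// firstAvailable returns the first URL that answers 200 to a HEAD request.
	// A 404 means no preload exists for that Kubernetes version, so the caller
	// falls back to pulling images individually.
	func firstAvailable(urls []string) (string, bool) {
		for _, u := range urls {
			resp, err := http.Head(u)
			if err != nil {
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return u, true
			}
			fmt.Printf("%s status code: %d\n", u, resp.StatusCode)
		}
		return "", false
	}

	func main() {
		// URLs copied from the log lines above.
		urls := []string{
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4",
			"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4",
		}
		if u, ok := firstAvailable(urls); ok {
			fmt.Println("preload found at", u)
		} else {
			fmt.Println("no preload available, continuing without it")
		}
	}
	)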
	I1217 11:48:56.175857 1880967 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/NoKubernetes-057260/config.json ...
	I1217 11:48:56.176103 1880967 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:48:56.176134 1880967 start.go:360] acquireMachinesLock for NoKubernetes-057260: {Name:mkd24e14fee7a10014a18938138b94303e4302b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:48:56.176197 1880967 start.go:364] duration metric: took 44.847µs to acquireMachinesLock for "NoKubernetes-057260"
	I1217 11:48:56.176215 1880967 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:48:56.176219 1880967 fix.go:54] fixHost starting: 
	I1217 11:48:56.176436 1880967 cli_runner.go:164] Run: docker container inspect NoKubernetes-057260 --format={{.State.Status}}
	I1217 11:48:56.197703 1880967 fix.go:112] recreateIfNeeded on NoKubernetes-057260: state=Stopped err=<nil>
	W1217 11:48:56.197730 1880967 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:48:55.274450 1872100 cli_runner.go:164] Run: docker network inspect missing-upgrade-837067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:48:55.294649 1872100 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 11:48:55.298974 1872100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
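	(The bash one-liner above strips any existing "host.minikube.internal" entry from /etc/hosts and appends a fresh "<gateway-ip>	host.minikube.internal" line via a temp file. A minimal Go sketch of the same idempotent update, assuming a simple read-rewrite is acceptable; run it against a copy of /etc/hosts unless you are root:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry drops any line ending in "\t<host>" and appends a fresh
	// "<ip>\t<host>" entry, matching what the shell pipeline in the log does.
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // old entry, replaced below
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Values taken from the log line above.
		if err := upsertHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	)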
	I1217 11:48:55.312215 1872100 kubeadm.go:883] updating cluster {Name:missing-upgrade-837067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-837067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:48:55.312392 1872100 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1217 11:48:55.312456 1872100 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:55.411376 1872100 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:55.411400 1872100 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:48:55.411445 1872100 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:48:55.459197 1872100 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:48:55.459212 1872100 cache_images.go:84] Images are preloaded, skipping loading
	I1217 11:48:55.459219 1872100 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1217 11:48:55.459404 1872100 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-837067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-837067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:48:55.459511 1872100 ssh_runner.go:195] Run: crio config
	I1217 11:48:55.515545 1872100 cni.go:84] Creating CNI manager for ""
	I1217 11:48:55.515561 1872100 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:48:55.515572 1872100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1217 11:48:55.515599 1872100 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-837067 NodeName:missing-upgrade-837067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:48:55.515721 1872100 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-837067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
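	(The generated kubeadm.yaml printed above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small standard-library-only Go sketch that splits such a stream on "---" and lists each document's kind, which is a quick way to eyeball that all four sections are present; the embedded string is a trimmed copy of the config above:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func main() {
		const generated = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`
		// Strip the indentation used for embedding, then split the stream.
		cfg := strings.ReplaceAll(generated, "\n\t", "\n")
		kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
		for i, doc := range strings.Split(cfg, "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				fmt.Printf("document %d: %s\n", i+1, m[1])
			}
		}
	}
	)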
	
	I1217 11:48:55.515782 1872100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1217 11:48:55.526306 1872100 binaries.go:44] Found k8s binaries, skipping transfer
	I1217 11:48:55.526382 1872100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:48:55.537220 1872100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 11:48:55.557563 1872100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:48:55.580312 1872100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1217 11:48:55.600783 1872100 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:48:55.604971 1872100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:48:55.617565 1872100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:48:55.689109 1872100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:48:55.713251 1872100 certs.go:68] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067 for IP: 192.168.103.2
	I1217 11:48:55.713264 1872100 certs.go:194] generating shared ca certs ...
	I1217 11:48:55.713281 1872100 certs.go:226] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:55.713454 1872100 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:48:55.713496 1872100 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:48:55.713502 1872100 certs.go:256] generating profile certs ...
	I1217 11:48:55.713588 1872100 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.key
	I1217 11:48:55.713603 1872100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.crt with IP's: []
	I1217 11:48:55.768099 1872100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.crt ...
	I1217 11:48:55.768117 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.crt: {Name:mkcee79ac49c17b92223bf4743ec1fb0439a1ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:55.768304 1872100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.key ...
	I1217 11:48:55.768315 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/client.key: {Name:mk086ba6497b1537290452fba1e98594d4a81406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:55.768437 1872100 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd
	I1217 11:48:55.768450 1872100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 11:48:56.120766 1872100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd ...
	I1217 11:48:56.120785 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd: {Name:mk760036d2ff1a511ddb7329819c412b7984f65e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.120957 1872100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd ...
	I1217 11:48:56.120966 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd: {Name:mk56e1457c3aded83fb895318296ad709a18674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.121039 1872100 certs.go:381] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt.2fdccbfd -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt
	I1217 11:48:56.121114 1872100 certs.go:385] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key.2fdccbfd -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key
	I1217 11:48:56.121163 1872100 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key
	I1217 11:48:56.121180 1872100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt with IP's: []
	I1217 11:48:56.272503 1872100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt ...
	I1217 11:48:56.272529 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt: {Name:mk967b58718d8a7bde433ec21d6553c1ba6ff0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.272732 1872100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key ...
	I1217 11:48:56.272748 1872100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key: {Name:mk7870b6035b9fdee1c01962106ebc42fc8e4d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:48:56.273028 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:48:56.273086 1872100 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:48:56.273100 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:48:56.273126 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:48:56.273150 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:48:56.273174 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:48:56.273231 1872100 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:48:56.274225 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:48:56.304509 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:48:56.343224 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:48:56.371722 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:48:56.397902 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 11:48:56.426326 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:48:56.457096 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:48:56.491518 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/missing-upgrade-837067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:48:56.524800 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:48:56.561392 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:48:56.590585 1872100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:48:56.619283 1872100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:48:56.640956 1872100 ssh_runner.go:195] Run: openssl version
	I1217 11:48:56.647048 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16729412.pem && ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem"
	I1217 11:48:56.657900 1872100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:48:56.661719 1872100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:48:56.661763 1872100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:48:56.669464 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0"
	I1217 11:48:56.682527 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1217 11:48:56.698968 1872100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:56.704862 1872100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:56.704921 1872100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:48:56.713429 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1217 11:48:56.728761 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1672941.pem && ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem"
	I1217 11:48:56.744725 1872100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:48:56.749717 1872100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:48:56.749776 1872100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:48:56.756950 1872100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0"
	I1217 11:48:56.767959 1872100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:48:56.772766 1872100 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:48:56.772837 1872100 kubeadm.go:392] StartCluster: {Name:missing-upgrade-837067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-837067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:48:56.772949 1872100 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:48:56.773008 1872100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:48:56.820919 1872100 cri.go:89] found id: ""
	I1217 11:48:56.820982 1872100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:48:56.834299 1872100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:48:56.845344 1872100 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:48:56.845401 1872100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:48:56.856336 1872100 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:48:56.856382 1872100 kubeadm.go:157] found existing configuration files:
	
	I1217 11:48:56.856437 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:48:56.866637 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:48:56.866693 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:48:56.876278 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:48:56.887158 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:48:56.887233 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:48:56.896866 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:48:56.906577 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:48:56.906648 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:48:56.915749 1872100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:48:56.925204 1872100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:48:56.925270 1872100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:48:56.935527 1872100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:48:56.976919 1872100 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1217 11:48:56.976964 1872100 kubeadm.go:310] [preflight] Running pre-flight checks
	I1217 11:48:56.996735 1872100 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:48:56.996830 1872100 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:48:56.996886 1872100 kubeadm.go:310] OS: Linux
	I1217 11:48:56.996962 1872100 kubeadm.go:310] CGROUPS_CPU: enabled
	I1217 11:48:56.997043 1872100 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1217 11:48:56.997113 1872100 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1217 11:48:56.997174 1872100 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1217 11:48:56.997233 1872100 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1217 11:48:56.997306 1872100 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1217 11:48:56.997366 1872100 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1217 11:48:56.997423 1872100 kubeadm.go:310] CGROUPS_IO: enabled
	I1217 11:48:57.056092 1872100 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:48:57.056246 1872100 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:48:57.056374 1872100 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:48:57.063888 1872100 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1217 11:48:54.414905 1873860 node_ready.go:57] node "pause-016656" has "Ready":"False" status (will retry)
	I1217 11:48:56.414657 1873860 node_ready.go:49] node "pause-016656" is "Ready"
	I1217 11:48:56.414693 1873860 node_ready.go:38] duration metric: took 10.003916334s for node "pause-016656" to be "Ready" ...
	I1217 11:48:56.414712 1873860 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:48:56.414770 1873860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:48:56.428005 1873860 api_server.go:72] duration metric: took 10.149002131s to wait for apiserver process to appear ...
	I1217 11:48:56.428033 1873860 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:48:56.428058 1873860 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:48:56.432577 1873860 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:48:56.433706 1873860 api_server.go:141] control plane version: v1.34.3
	I1217 11:48:56.433740 1873860 api_server.go:131] duration metric: took 5.698165ms to wait for apiserver health ...
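	(The lines above poll the apiserver's /healthz endpoint over HTTPS until it answers 200 "ok" before moving on to pod checks. A minimal Go sketch of such a poll; it skips TLS verification for brevity, which the real test does not do, and the address is the one from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns 200
	// or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz did not become ready within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	)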
	I1217 11:48:56.433752 1873860 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:48:56.437506 1873860 system_pods.go:59] 7 kube-system pods found
	I1217 11:48:56.437613 1873860 system_pods.go:61] "coredns-66bc5c9577-xcwn4" [fd7eeebf-c0c0-4924-9c0f-c6270ee45be8] Running
	I1217 11:48:56.437631 1873860 system_pods.go:61] "etcd-pause-016656" [3f471de9-5bd5-4796-8782-4ae345738b9c] Running
	I1217 11:48:56.437638 1873860 system_pods.go:61] "kindnet-m9tqf" [51e023f8-bdf8-4fd1-8ad5-dc0d157fcf38] Running
	I1217 11:48:56.437643 1873860 system_pods.go:61] "kube-apiserver-pause-016656" [411e886f-e937-4300-b262-54bf7c427a81] Running
	I1217 11:48:56.437659 1873860 system_pods.go:61] "kube-controller-manager-pause-016656" [aac2771a-b74c-4734-8e4a-a54d34aca8b4] Running
	I1217 11:48:56.437683 1873860 system_pods.go:61] "kube-proxy-9gv76" [ebdee9ea-1b73-4865-a8b5-72f039f8bb34] Running
	I1217 11:48:56.437692 1873860 system_pods.go:61] "kube-scheduler-pause-016656" [68a7444d-6f18-4569-8d0b-7dabd2494695] Running
	I1217 11:48:56.437700 1873860 system_pods.go:74] duration metric: took 3.940536ms to wait for pod list to return data ...
	I1217 11:48:56.437710 1873860 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:48:56.440166 1873860 default_sa.go:45] found service account: "default"
	I1217 11:48:56.440187 1873860 default_sa.go:55] duration metric: took 2.466906ms for default service account to be created ...
	I1217 11:48:56.440197 1873860 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:48:56.443184 1873860 system_pods.go:86] 7 kube-system pods found
	I1217 11:48:56.443212 1873860 system_pods.go:89] "coredns-66bc5c9577-xcwn4" [fd7eeebf-c0c0-4924-9c0f-c6270ee45be8] Running
	I1217 11:48:56.443219 1873860 system_pods.go:89] "etcd-pause-016656" [3f471de9-5bd5-4796-8782-4ae345738b9c] Running
	I1217 11:48:56.443223 1873860 system_pods.go:89] "kindnet-m9tqf" [51e023f8-bdf8-4fd1-8ad5-dc0d157fcf38] Running
	I1217 11:48:56.443228 1873860 system_pods.go:89] "kube-apiserver-pause-016656" [411e886f-e937-4300-b262-54bf7c427a81] Running
	I1217 11:48:56.443233 1873860 system_pods.go:89] "kube-controller-manager-pause-016656" [aac2771a-b74c-4734-8e4a-a54d34aca8b4] Running
	I1217 11:48:56.443238 1873860 system_pods.go:89] "kube-proxy-9gv76" [ebdee9ea-1b73-4865-a8b5-72f039f8bb34] Running
	I1217 11:48:56.443243 1873860 system_pods.go:89] "kube-scheduler-pause-016656" [68a7444d-6f18-4569-8d0b-7dabd2494695] Running
	I1217 11:48:56.443252 1873860 system_pods.go:126] duration metric: took 3.048235ms to wait for k8s-apps to be running ...
	I1217 11:48:56.443264 1873860 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:48:56.443309 1873860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:48:56.469571 1873860 system_svc.go:56] duration metric: took 26.28868ms WaitForService to wait for kubelet
	I1217 11:48:56.469614 1873860 kubeadm.go:587] duration metric: took 10.190617868s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:48:56.469637 1873860 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:48:56.472975 1873860 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:48:56.473010 1873860 node_conditions.go:123] node cpu capacity is 8
	I1217 11:48:56.473039 1873860 node_conditions.go:105] duration metric: took 3.39449ms to run NodePressure ...
	I1217 11:48:56.473055 1873860 start.go:242] waiting for startup goroutines ...
	I1217 11:48:56.473076 1873860 start.go:247] waiting for cluster config update ...
	I1217 11:48:56.473086 1873860 start.go:256] writing updated cluster config ...
	I1217 11:48:56.473457 1873860 ssh_runner.go:195] Run: rm -f paused
	I1217 11:48:56.478494 1873860 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:48:56.479061 1873860 kapi.go:59] client config for pause-016656: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/pause-016656/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/pause-016656/client.key", CAFile:"/home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
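	(The client config dump above shows the fields the pod-readiness checks below rely on: the apiserver host plus the profile's client cert, client key, and the cluster CA. A hedged sketch of building an equivalent client with client-go, assuming the k8s.io/client-go and k8s.io/apimachinery modules are on the module path; this is an illustration, not the test's own code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Field values mirror the config dump above.
		cfg := &rest.Config{
			Host: "https://192.168.94.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/pause-016656/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/pause-016656/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List kube-system pods, the namespace the readiness checks below poll.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}
	)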
	I1217 11:48:56.482711 1873860 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xcwn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.489006 1873860 pod_ready.go:94] pod "coredns-66bc5c9577-xcwn4" is "Ready"
	I1217 11:48:56.489037 1873860 pod_ready.go:86] duration metric: took 6.300583ms for pod "coredns-66bc5c9577-xcwn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.491561 1873860 pod_ready.go:83] waiting for pod "etcd-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.497172 1873860 pod_ready.go:94] pod "etcd-pause-016656" is "Ready"
	I1217 11:48:56.497202 1873860 pod_ready.go:86] duration metric: took 5.614635ms for pod "etcd-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.499515 1873860 pod_ready.go:83] waiting for pod "kube-apiserver-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.504066 1873860 pod_ready.go:94] pod "kube-apiserver-pause-016656" is "Ready"
	I1217 11:48:56.504092 1873860 pod_ready.go:86] duration metric: took 4.530028ms for pod "kube-apiserver-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.506324 1873860 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:56.882937 1873860 pod_ready.go:94] pod "kube-controller-manager-pause-016656" is "Ready"
	I1217 11:48:56.882967 1873860 pod_ready.go:86] duration metric: took 376.622034ms for pod "kube-controller-manager-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:57.083154 1873860 pod_ready.go:83] waiting for pod "kube-proxy-9gv76" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:57.065749 1872100 out.go:235]   - Generating certificates and keys ...
	I1217 11:48:57.065844 1872100 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1217 11:48:57.065910 1872100 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1217 11:48:57.231593 1872100 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:48:57.295295 1872100 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:48:57.559278 1872100 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:48:57.483244 1873860 pod_ready.go:94] pod "kube-proxy-9gv76" is "Ready"
	I1217 11:48:57.483277 1873860 pod_ready.go:86] duration metric: took 400.093046ms for pod "kube-proxy-9gv76" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:57.683122 1873860 pod_ready.go:83] waiting for pod "kube-scheduler-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:58.082861 1873860 pod_ready.go:94] pod "kube-scheduler-pause-016656" is "Ready"
	I1217 11:48:58.082894 1873860 pod_ready.go:86] duration metric: took 399.744839ms for pod "kube-scheduler-pause-016656" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:48:58.082910 1873860 pod_ready.go:40] duration metric: took 1.604332629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:48:58.134637 1873860 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:48:58.137086 1873860 out.go:179] * Done! kubectl is now configured to use "pause-016656" cluster and "default" namespace by default
	I1217 11:48:57.883956 1872100 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1217 11:48:57.957378 1872100 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1217 11:48:57.957500 1872100 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-837067] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 11:48:58.103357 1872100 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1217 11:48:58.103599 1872100 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-837067] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 11:48:58.246089 1872100 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:48:58.407262 1872100 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:48:58.562339 1872100 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1217 11:48:58.562685 1872100 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:48:58.808055 1872100 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:48:58.971099 1872100 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:48:59.319690 1872100 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:48:59.671158 1872100 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:48:59.847253 1872100 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:48:59.847966 1872100 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:48:59.852363 1872100 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:48:56.199764 1880967 out.go:252] * Restarting existing docker container for "NoKubernetes-057260" ...
	I1217 11:48:56.199937 1880967 cli_runner.go:164] Run: docker start NoKubernetes-057260
	I1217 11:48:56.471401 1880967 cli_runner.go:164] Run: docker container inspect NoKubernetes-057260 --format={{.State.Status}}
	I1217 11:48:56.495268 1880967 kic.go:430] container "NoKubernetes-057260" state is running.
	I1217 11:48:56.495765 1880967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-057260
	I1217 11:48:56.518492 1880967 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/NoKubernetes-057260/config.json ...
	I1217 11:48:56.518756 1880967 machine.go:94] provisionDockerMachine start ...
	I1217 11:48:56.518812 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:48:56.540933 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:48:56.541179 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:48:56.541186 1880967 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:48:56.542018 1880967 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52294->127.0.0.1:34546: read: connection reset by peer
	I1217 11:48:59.687934 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-057260
	
	I1217 11:48:59.687956 1880967 ubuntu.go:182] provisioning hostname "NoKubernetes-057260"
	I1217 11:48:59.688031 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:48:59.713437 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:48:59.713824 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:48:59.713835 1880967 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-057260 && echo "NoKubernetes-057260" | sudo tee /etc/hostname
	I1217 11:48:59.871427 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-057260
	
	I1217 11:48:59.871508 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:48:59.894905 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:48:59.895376 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:48:59.895407 1880967 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-057260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-057260/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-057260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:49:00.033746 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:49:00.033767 1880967 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:49:00.033792 1880967 ubuntu.go:190] setting up certificates
	I1217 11:49:00.033804 1880967 provision.go:84] configureAuth start
	I1217 11:49:00.033886 1880967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-057260
	I1217 11:49:00.055027 1880967 provision.go:143] copyHostCerts
	I1217 11:49:00.055096 1880967 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:49:00.055107 1880967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:49:00.055165 1880967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:49:00.055285 1880967 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:49:00.055291 1880967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:49:00.055335 1880967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:49:00.055444 1880967 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:49:00.055451 1880967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:49:00.055490 1880967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:49:00.055640 1880967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-057260 san=[127.0.0.1 192.168.85.2 NoKubernetes-057260 localhost minikube]
	I1217 11:49:00.101381 1880967 provision.go:177] copyRemoteCerts
	I1217 11:49:00.101429 1880967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:49:00.101471 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.119696 1880967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34546 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/NoKubernetes-057260/id_rsa Username:docker}
	I1217 11:49:00.214511 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:49:00.234098 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 11:49:00.252391 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:49:00.270148 1880967 provision.go:87] duration metric: took 236.329499ms to configureAuth
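The server certificate generated and copied above embeds the SANs listed in the log (127.0.0.1, 192.168.85.2, NoKubernetes-057260, localhost, minikube). A hedged way to double-check them inside the node, assuming the /etc/docker paths shown in the scp lines:

    # print the SAN block of the provisioned server certificate
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'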
	I1217 11:49:00.270171 1880967 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:49:00.270401 1880967 config.go:182] Loaded profile config "NoKubernetes-057260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 11:49:00.270546 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.289670 1880967 main.go:143] libmachine: Using SSH client type: native
	I1217 11:49:00.289999 1880967 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34546 <nil> <nil>}
	I1217 11:49:00.290017 1880967 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:49:01.915587 1873296 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 11:49:01.915670 1873296 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:49:01.915779 1873296 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:49:01.915845 1873296 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:49:01.915893 1873296 kubeadm.go:319] OS: Linux
	I1217 11:49:01.916040 1873296 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:49:01.916144 1873296 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:49:01.916360 1873296 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:49:01.916437 1873296 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:49:01.916521 1873296 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:49:01.916611 1873296 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:49:01.916710 1873296 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:49:01.916771 1873296 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:49:01.916866 1873296 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:49:01.916996 1873296 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:49:01.917120 1873296 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 11:49:01.917203 1873296 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:49:01.919237 1873296 out.go:252]   - Generating certificates and keys ...
	I1217 11:49:01.919343 1873296 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:49:01.919443 1873296 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:49:01.919545 1873296 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:49:01.919629 1873296 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:49:01.919713 1873296 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:49:01.919782 1873296 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:49:01.919855 1873296 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:49:01.920037 1873296 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-556754 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:49:01.920110 1873296 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:49:01.920286 1873296 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-556754 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:49:01.920372 1873296 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:49:01.920463 1873296 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:49:01.920528 1873296 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:49:01.920623 1873296 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:49:01.920690 1873296 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:49:01.920764 1873296 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:49:01.920852 1873296 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:49:01.920934 1873296 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:49:01.921048 1873296 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:49:01.921141 1873296 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:49:01.922699 1873296 out.go:252]   - Booting up control plane ...
	I1217 11:49:01.922877 1873296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:49:01.923037 1873296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:49:01.923216 1873296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:49:01.923403 1873296 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:49:01.923542 1873296 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:49:01.923606 1873296 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:49:01.923825 1873296 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 11:49:01.923951 1873296 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502254 seconds
	I1217 11:49:01.924089 1873296 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:49:01.924307 1873296 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:49:01.924425 1873296 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:49:01.924726 1873296 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-556754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:49:01.924816 1873296 kubeadm.go:319] [bootstrap-token] Using token: 7aaes4.qyjci288xgwnnuy8
	I1217 11:49:01.926492 1873296 out.go:252]   - Configuring RBAC rules ...
	I1217 11:49:01.926693 1873296 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:49:01.926824 1873296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:49:01.927052 1873296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:49:01.927235 1873296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:49:01.927418 1873296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:49:01.927687 1873296 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:49:01.927862 1873296 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:49:01.927938 1873296 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:49:01.928030 1873296 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:49:01.928043 1873296 kubeadm.go:319] 
	I1217 11:49:01.928120 1873296 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:49:01.928126 1873296 kubeadm.go:319] 
	I1217 11:49:01.928216 1873296 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:49:01.928223 1873296 kubeadm.go:319] 
	I1217 11:49:01.928255 1873296 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:49:01.928330 1873296 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:49:01.928403 1873296 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:49:01.928413 1873296 kubeadm.go:319] 
	I1217 11:49:01.928492 1873296 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:49:01.928501 1873296 kubeadm.go:319] 
	I1217 11:49:01.928580 1873296 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:49:01.928586 1873296 kubeadm.go:319] 
	I1217 11:49:01.928654 1873296 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:49:01.928760 1873296 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:49:01.928854 1873296 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:49:01.928861 1873296 kubeadm.go:319] 
	I1217 11:49:01.928981 1873296 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:49:01.929104 1873296 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:49:01.929120 1873296 kubeadm.go:319] 
	I1217 11:49:01.929246 1873296 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7aaes4.qyjci288xgwnnuy8 \
	I1217 11:49:01.929433 1873296 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:49:01.929475 1873296 kubeadm.go:319] 	--control-plane 
	I1217 11:49:01.929486 1873296 kubeadm.go:319] 
	I1217 11:49:01.929682 1873296 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:49:01.929705 1873296 kubeadm.go:319] 
	I1217 11:49:01.929874 1873296 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7aaes4.qyjci288xgwnnuy8 \
	I1217 11:49:01.930100 1873296 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:49:01.930116 1873296 cni.go:84] Creating CNI manager for ""
	I1217 11:49:01.930126 1873296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:49:01.931953 1873296 out.go:179] * Configuring CNI (Container Networking Interface) ...
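With the docker driver and the crio runtime, minikube recommends kindnet and applies a small manifest with the bundled kubectl, as the cni.go and ssh_runner lines around this step show. A hedged way to confirm the CNI pods once the apply finishes (the app=kindnet label is an assumption based on the pod names elsewhere in this report):

    kubectl -n kube-system get pods -l app=kindnet -o wide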
	I1217 11:48:59.853876 1872100 out.go:235]   - Booting up control plane ...
	I1217 11:48:59.853996 1872100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:48:59.854084 1872100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:48:59.854979 1872100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:48:59.867907 1872100 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:48:59.876543 1872100 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:48:59.876602 1872100 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1217 11:48:59.978512 1872100 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:48:59.978713 1872100 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:49:00.480893 1872100 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.143753ms
	I1217 11:49:00.480995 1872100 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
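The kubelet-check and api-check phases above poll plain HTTP(S) health endpoints. If a run stalls here, the same probes can be issued by hand inside the node; this is only a sketch, using the kubelet port named in the log and minikube's usual 8443 API server port:

    # kubelet liveness endpoint polled by kubeadm
    curl -sS http://127.0.0.1:10248/healthz; echo
    # API server readiness (self-signed cert, hence -k)
    curl -ksS https://127.0.0.1:8443/readyz; echo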
	I1217 11:49:01.937905 1873296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:49:01.943005 1873296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1217 11:49:01.943026 1873296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:49:01.961966 1873296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:49:02.863100 1873296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:49:02.863176 1873296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:49:02.863181 1873296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-556754 minikube.k8s.io/updated_at=2025_12_17T11_49_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=kubernetes-upgrade-556754 minikube.k8s.io/primary=true
	I1217 11:49:02.878752 1873296 ops.go:34] apiserver oom_adj: -16
	I1217 11:49:03.035642 1873296 kubeadm.go:1114] duration metric: took 172.53721ms to wait for elevateKubeSystemPrivileges
	I1217 11:49:03.035744 1873296 kubeadm.go:403] duration metric: took 10.084400231s to StartCluster
	I1217 11:49:03.035771 1873296 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:49:03.035866 1873296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:49:03.037241 1873296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:49:03.037525 1873296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:49:03.037999 1873296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:49:03.037917 1873296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:49:03.038487 1873296 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-556754"
	I1217 11:49:03.038511 1873296 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-556754"
	I1217 11:49:03.038560 1873296 host.go:66] Checking if "kubernetes-upgrade-556754" exists ...
	I1217 11:49:03.038945 1873296 config.go:182] Loaded profile config "kubernetes-upgrade-556754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 11:49:03.039002 1873296 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-556754"
	I1217 11:49:03.039017 1873296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-556754"
	I1217 11:49:03.039118 1873296 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-556754 --format={{.State.Status}}
	I1217 11:49:03.039267 1873296 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-556754 --format={{.State.Status}}
	I1217 11:49:03.042546 1873296 out.go:179] * Verifying Kubernetes components...
	I1217 11:49:03.044145 1873296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:49:03.069508 1873296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
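storage-provisioner and default-storageclass are the two addons being enabled for this profile. An illustrative post-start check (profile and context names taken from the log; the storage-provisioner pod name is minikube's usual one):

    minikube -p kubernetes-upgrade-556754 addons list | grep -E 'storage-provisioner|default-storageclass'
    kubectl --context kubernetes-upgrade-556754 -n kube-system get pod storage-provisioner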
	I1217 11:49:00.566447 1880967 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:49:00.566465 1880967 machine.go:97] duration metric: took 4.047698853s to provisionDockerMachine
	I1217 11:49:00.566480 1880967 start.go:293] postStartSetup for "NoKubernetes-057260" (driver="docker")
	I1217 11:49:00.566492 1880967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:49:00.566583 1880967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:49:00.566630 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.586915 1880967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34546 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/NoKubernetes-057260/id_rsa Username:docker}
	I1217 11:49:00.684768 1880967 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:49:00.688756 1880967 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:49:00.688774 1880967 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:49:00.688784 1880967 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:49:00.688849 1880967 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:49:00.688954 1880967 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:49:00.689070 1880967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:49:00.699024 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:49:00.717259 1880967 start.go:296] duration metric: took 150.765308ms for postStartSetup
	I1217 11:49:00.717334 1880967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:49:00.717370 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.737770 1880967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34546 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/NoKubernetes-057260/id_rsa Username:docker}
	I1217 11:49:00.831299 1880967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:49:00.836519 1880967 fix.go:56] duration metric: took 4.660289975s for fixHost
	I1217 11:49:00.836558 1880967 start.go:83] releasing machines lock for "NoKubernetes-057260", held for 4.660353245s
	I1217 11:49:00.836639 1880967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-057260
	I1217 11:49:00.859052 1880967 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:49:00.859097 1880967 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:49:00.859103 1880967 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:49:00.859131 1880967 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:49:00.859151 1880967 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:49:00.859172 1880967 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:49:00.859225 1880967 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:49:00.859293 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:49:00.859335 1880967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-057260
	I1217 11:49:00.879335 1880967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34546 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/NoKubernetes-057260/id_rsa Username:docker}
	I1217 11:49:01.009834 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:49:01.043759 1880967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:49:01.069744 1880967 ssh_runner.go:195] Run: openssl version
	I1217 11:49:01.078091 1880967 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:49:01.088726 1880967 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:49:01.100092 1880967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:49:01.105288 1880967 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:49:01.105338 1880967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:49:01.150915 1880967 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:49:01.160383 1880967 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:49:01.169100 1880967 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:49:01.178750 1880967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:49:01.183918 1880967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:49:01.183972 1880967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:49:01.226819 1880967 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:49:01.238091 1880967 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:49:01.250390 1880967 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:49:01.261350 1880967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:49:01.267105 1880967 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:49:01.267193 1880967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:49:01.312979 1880967 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
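The openssl/ln/test sequence above installs each CA under /usr/share/ca-certificates and exposes it through an OpenSSL subject-hash symlink in /etc/ssl/certs. The naming convention can be reproduced by hand, as a sketch using the minikubeCA file copied earlier:

    # the symlink name is the certificate's subject hash plus a ".0" suffix
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"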
	I1217 11:49:01.321938 1880967 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:49:01.325751 1880967 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 11:49:01.329924 1880967 ssh_runner.go:195] Run: cat /version.json
	I1217 11:49:01.330016 1880967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:49:01.334237 1880967 ssh_runner.go:195] Run: systemctl --version
	I1217 11:49:01.393776 1880967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:49:01.438298 1880967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:49:01.443800 1880967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:49:01.443869 1880967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:49:01.453381 1880967 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:49:01.453404 1880967 start.go:496] detecting cgroup driver to use...
	I1217 11:49:01.453460 1880967 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:49:01.453607 1880967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:49:01.469657 1880967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:49:01.486115 1880967 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:49:01.486167 1880967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:49:01.506394 1880967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:49:01.524021 1880967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:49:01.656002 1880967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:49:01.780331 1880967 docker.go:234] disabling docker service ...
	I1217 11:49:01.780400 1880967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:49:01.809320 1880967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:49:01.830634 1880967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:49:01.973772 1880967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:49:02.090693 1880967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:49:02.109697 1880967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:49:02.139584 1880967 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1217 11:49:02.690700 1880967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 11:49:02.690755 1880967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:49:02.703081 1880967 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:49:02.703148 1880967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:49:02.714103 1880967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:49:02.723714 1880967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:49:02.734631 1880967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:49:02.748940 1880967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:49:02.761396 1880967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:49:02.777737 1880967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:49:02.908847 1880967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 11:49:03.134784 1880967 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:49:03.134837 1880967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:49:03.139700 1880967 start.go:564] Will wait 60s for crictl version
	I1217 11:49:03.139783 1880967 ssh_runner.go:195] Run: which crictl
	I1217 11:49:03.145237 1880967 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:49:03.186140 1880967 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:49:03.186219 1880967 ssh_runner.go:195] Run: crio --version
	I1217 11:49:03.233161 1880967 ssh_runner.go:195] Run: crio --version
	I1217 11:49:03.286264 1880967 out.go:179] * Preparing CRI-O 1.34.3 ...
	I1217 11:49:03.291974 1880967 ssh_runner.go:195] Run: rm -f paused
	I1217 11:49:03.301395 1880967 out.go:179] * Done! minikube is ready without Kubernetes!
	I1217 11:49:03.305361 1880967 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
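For a profile started without Kubernetes like this one, the commands in the box above operate on the node and its container runtime directly. A brief illustrative session (profile name from the log):

    minikube -p NoKubernetes-057260 ssh          # drop into the node
    # inside the node: sudo crictl ps            # list CRI-O containers
    minikube -p NoKubernetes-057260 podman-env   # env vars for the podman CLI, as suggested above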
	
	
	==> CRI-O <==
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.906280522Z" level=info msg="RDT not available in the host system"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.906298257Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.907364927Z" level=info msg="Conmon does support the --sync option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.907393198Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.90742119Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.908252768Z" level=info msg="Conmon does support the --sync option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.908276981Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.912918052Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.912946172Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.914014004Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.914572867Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 11:48:44 pause-016656 crio[2311]: time="2025-12-17T11:48:44.91464142Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.010756075Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-xcwn4 Namespace:kube-system ID:c613cd5a14f9e6c991632c731eb15db0bd1f8a28c52ff6822d5584af33722bd2 UID:fd7eeebf-c0c0-4924-9c0f-c6270ee45be8 NetNS:/var/run/netns/b95ef1ca-b11f-4ac3-bd94-4261b419a7a8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a5a8}] Aliases:map[]}"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.01095944Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-xcwn4 for CNI network kindnet (type=ptp)"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011351599Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011389305Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011444352Z" level=info msg="Create NRI interface"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011595754Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011613128Z" level=info msg="runtime interface created"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011628265Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011640478Z" level=info msg="runtime interface starting up..."
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011648368Z" level=info msg="starting plugins..."
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.011667633Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:48:45 pause-016656 crio[2311]: time="2025-12-17T11:48:45.012069996Z" level=info msg="No systemd watchdog enabled"
	Dec 17 11:48:45 pause-016656 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	738b93ea18f54       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     28 seconds ago      Running             coredns                   0                   c613cd5a14f9e       coredns-66bc5c9577-xcwn4               kube-system
	849a0e31e1a02       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   40 seconds ago      Running             kindnet-cni               0                   0edf52eafd8d3       kindnet-m9tqf                          kube-system
	6783c23b649da       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     42 seconds ago      Running             kube-proxy                0                   c554107b48c99       kube-proxy-9gv76                       kube-system
	f5667db940ec1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     54 seconds ago      Running             kube-controller-manager   0                   b7852f3a396ca       kube-controller-manager-pause-016656   kube-system
	af6a64f34f500       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     54 seconds ago      Running             etcd                      0                   941d8f4a270ba       etcd-pause-016656                      kube-system
	0693ab25679d5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     54 seconds ago      Running             kube-apiserver            0                   380e2c7df2e68       kube-apiserver-pause-016656            kube-system
	2998ec06acced       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     54 seconds ago      Running             kube-scheduler            0                   ea75785a44953       kube-scheduler-pause-016656            kube-system
	
	
	==> coredns [738b93ea18f545c95fceec6b5cc86d44e7222917e68ec3f742e523eda4b33f63] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53425 - 46771 "HINFO IN 7261628847673218474.7996560943190306420. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03799586s
	
	
	==> describe nodes <==
	Name:               pause-016656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-016656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=pause-016656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_48_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:48:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-016656
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:48:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:48:56 +0000   Wed, 17 Dec 2025 11:48:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-016656
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                714c64a1-e138-4bdb-af56-24cbdbc1efaa
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xcwn4                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     44s
	  kube-system                 etcd-pause-016656                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         49s
	  kube-system                 kindnet-m9tqf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-apiserver-pause-016656             250m (3%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-pause-016656    200m (2%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-9gv76                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-pause-016656             100m (1%)     0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 42s               kube-proxy       
	  Normal  Starting                 49s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s               kubelet          Node pause-016656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s               kubelet          Node pause-016656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s               kubelet          Node pause-016656 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s               node-controller  Node pause-016656 event: Registered Node pause-016656 in Controller
	  Normal  NodeNotReady             18s               kubelet          Node pause-016656 status is now: NodeNotReady
	  Normal  NodeReady                8s (x2 over 30s)  kubelet          Node pause-016656 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [af6a64f34f500a3de7067fe3192f7f7f925bc08286bfed53e0f722f0b96a037c] <==
	{"level":"warn","ts":"2025-12-17T11:48:11.889130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.896474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.903755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.910618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.917827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.934967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.941609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.948923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:48:11.999896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46172","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T11:48:34.304755Z","caller":"traceutil/trace.go:172","msg":"trace[150989265] linearizableReadLoop","detail":"{readStateIndex:400; appliedIndex:400; }","duration":"100.515878ms","start":"2025-12-17T11:48:34.204214Z","end":"2025-12-17T11:48:34.304730Z","steps":["trace[150989265] 'read index received'  (duration: 100.49466ms)","trace[150989265] 'applied index is now lower than readState.Index'  (duration: 18.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.309341Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.082435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-12-17T11:48:34.309442Z","caller":"traceutil/trace.go:172","msg":"trace[465844435] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:386; }","duration":"105.22609ms","start":"2025-12-17T11:48:34.204202Z","end":"2025-12-17T11:48:34.309428Z","steps":["trace[465844435] 'agreement among raft nodes before linearized reading'  (duration: 100.615545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.309921Z","caller":"traceutil/trace.go:172","msg":"trace[546769526] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"166.590982ms","start":"2025-12-17T11:48:34.143310Z","end":"2025-12-17T11:48:34.309901Z","steps":["trace[546769526] 'process raft request'  (duration: 161.444675ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.472238Z","caller":"traceutil/trace.go:172","msg":"trace[1199757923] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:402; }","duration":"122.106605ms","start":"2025-12-17T11:48:34.350105Z","end":"2025-12-17T11:48:34.472211Z","steps":["trace[1199757923] 'read index received'  (duration: 122.087466ms)","trace[1199757923] 'applied index is now lower than readState.Index'  (duration: 17.958µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.478901Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.772032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:48:34.478966Z","caller":"traceutil/trace.go:172","msg":"trace[1905792807] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:387; }","duration":"128.861414ms","start":"2025-12-17T11:48:34.350092Z","end":"2025-12-17T11:48:34.478953Z","steps":["trace[1905792807] 'agreement among raft nodes before linearized reading'  (duration: 122.217473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.478991Z","caller":"traceutil/trace.go:172","msg":"trace[1342931433] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"156.924302ms","start":"2025-12-17T11:48:34.322057Z","end":"2025-12-17T11:48:34.478982Z","steps":["trace[1342931433] 'process raft request'  (duration: 150.213792ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.480575Z","caller":"traceutil/trace.go:172","msg":"trace[1570635261] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"133.294999ms","start":"2025-12-17T11:48:34.347262Z","end":"2025-12-17T11:48:34.480557Z","steps":["trace[1570635261] 'process raft request'  (duration: 132.933202ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.615165Z","caller":"traceutil/trace.go:172","msg":"trace[957408065] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:404; }","duration":"129.956406ms","start":"2025-12-17T11:48:34.485187Z","end":"2025-12-17T11:48:34.615143Z","steps":["trace[957408065] 'read index received'  (duration: 129.948086ms)","trace[957408065] 'applied index is now lower than readState.Index'  (duration: 7.138µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.691440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.235366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-12-17T11:48:34.691498Z","caller":"traceutil/trace.go:172","msg":"trace[405206821] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:389; }","duration":"206.300639ms","start":"2025-12-17T11:48:34.485181Z","end":"2025-12-17T11:48:34.691482Z","steps":["trace[405206821] 'agreement among raft nodes before linearized reading'  (duration: 130.052557ms)","trace[405206821] 'range keys from in-memory index tree'  (duration: 76.135088ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:48:34.691620Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.343841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:48:34.691679Z","caller":"traceutil/trace.go:172","msg":"trace[583612712] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:390; }","duration":"206.413963ms","start":"2025-12-17T11:48:34.485251Z","end":"2025-12-17T11:48:34.691665Z","steps":["trace[583612712] 'agreement among raft nodes before linearized reading'  (duration: 206.319109ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:48:34.691730Z","caller":"traceutil/trace.go:172","msg":"trace[1939106350] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"207.975684ms","start":"2025-12-17T11:48:34.483734Z","end":"2025-12-17T11:48:34.691709Z","steps":["trace[1939106350] 'process raft request'  (duration: 131.449812ms)","trace[1939106350] 'compare'  (duration: 76.255847ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:48:34.881717Z","caller":"traceutil/trace.go:172","msg":"trace[768152905] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"154.819671ms","start":"2025-12-17T11:48:34.726876Z","end":"2025-12-17T11:48:34.881695Z","steps":["trace[768152905] 'process raft request'  (duration: 74.678185ms)","trace[768152905] 'compare'  (duration: 80.019133ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:49:04 up  5:31,  0 user,  load average: 4.55, 2.07, 1.47
	Linux pause-016656 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [849a0e31e1a02f7d874df40c5247298424b7cc7ecc4c6af63bddcf02ed5b3bf5] <==
	I1217 11:48:23.706397       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:48:23.706680       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:48:23.706813       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:48:23.706834       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:48:23.706854       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:48:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:48:23.911411       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:48:23.911471       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:48:23.911484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:48:24.004106       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:48:24.255919       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:48:24.255948       1 metrics.go:72] Registering metrics
	I1217 11:48:24.256020       1 controller.go:711] "Syncing nftables rules"
	I1217 11:48:33.918271       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:48:33.918340       1 main.go:301] handling current node
	I1217 11:48:44.003523       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:48:44.003587       1 main.go:301] handling current node
	I1217 11:48:53.917655       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:48:53.917706       1 main.go:301] handling current node
	I1217 11:49:03.916579       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:49:03.916622       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0693ab25679d5756c48ca38263cfd8995d66c84715097772ae29f18b72bdf1a7] <==
	I1217 11:48:12.504035       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 11:48:12.504122       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 11:48:12.504511       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 11:48:12.509203       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 11:48:12.509284       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:48:12.516027       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:48:12.516283       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:48:12.695335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:48:13.405457       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 11:48:13.409846       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:48:13.409866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:48:14.089442       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:48:14.132327       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:48:14.210035       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:48:14.216488       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 11:48:14.217642       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:48:14.221730       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:48:14.442460       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:48:15.360990       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:48:15.382798       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:48:15.418828       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 11:48:20.197002       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 11:48:20.294903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:48:20.498063       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:48:20.512803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [f5667db940ec1747e843db665e3b2bb01474456533fe3fcb8120230f3d8fbea4] <==
	I1217 11:48:19.443784       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:48:19.443865       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:48:19.443948       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:48:19.443949       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 11:48:19.443995       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:48:19.446268       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:48:19.447406       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 11:48:19.448594       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 11:48:19.448741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:48:19.448759       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 11:48:19.448767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 11:48:19.448921       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 11:48:19.449000       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 11:48:19.449061       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 11:48:19.449076       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 11:48:19.449083       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 11:48:19.456181       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 11:48:19.456349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:48:19.456858       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:48:19.457407       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-016656" podCIDRs=["10.244.0.0/24"]
	I1217 11:48:19.466483       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:48:19.467697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:48:34.396318       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 11:48:49.399020       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 11:48:59.400143       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6783c23b649da94f218090448e20634bd08ed8613ee0fc4970baf0710d1cb37a] <==
	I1217 11:48:21.209864       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:48:21.287657       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:48:21.388040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:48:21.388084       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 11:48:21.388172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:48:21.408453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:48:21.408504       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:48:21.414079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:48:21.414498       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:48:21.414585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:48:21.416013       1 config.go:200] "Starting service config controller"
	I1217 11:48:21.416058       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:48:21.416037       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:48:21.416105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:48:21.416158       1 config.go:309] "Starting node config controller"
	I1217 11:48:21.416175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:48:21.416183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:48:21.416193       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:48:21.416217       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:48:21.516331       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:48:21.516403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:48:21.516452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2998ec06acced2c8225eaab508140d0b2fb0b8b67c04c2f395375c463dfdf085] <==
	E1217 11:48:12.466993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:48:12.467023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:48:12.467115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:48:12.467217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:48:12.467285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:48:12.467566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:48:12.467298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:48:12.467351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:48:12.467653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:48:12.468182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:48:12.468573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:48:13.274781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 11:48:13.350344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 11:48:13.551073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:48:13.572525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:48:13.586974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:48:13.595178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:48:13.605525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:48:13.649828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:48:13.663120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:48:13.729015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:48:13.774420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:48:13.813293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:48:14.020829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1217 11:48:16.263152       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:48:39 pause-016656 kubelet[1361]: E1217 11:48:39.420582    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:39 pause-016656 kubelet[1361]: E1217 11:48:39.420605    1361 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:39 pause-016656 kubelet[1361]: W1217 11:48:39.436957    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:39 pause-016656 kubelet[1361]: W1217 11:48:39.624282    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:39 pause-016656 kubelet[1361]: W1217 11:48:39.844938    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:40 pause-016656 kubelet[1361]: W1217 11:48:40.255087    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.325822    1361 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.325897    1361 kubelet.go:2997] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.421788    1361 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.421854    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: E1217 11:48:40.421870    1361 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:40 pause-016656 kubelet[1361]: W1217 11:48:40.967601    1361 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.335815    1361 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.336090    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.336196    1361 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.336224    1361 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.423211    1361 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.423264    1361 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:41 pause-016656 kubelet[1361]: E1217 11:48:41.423277    1361 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 11:48:45 pause-016656 kubelet[1361]: E1217 11:48:45.328119    1361 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Dec 17 11:48:46 pause-016656 kubelet[1361]: I1217 11:48:46.141796    1361 setters.go:543] "Node became not ready" node="pause-016656" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T11:48:46Z","lastTransitionTime":"2025-12-17T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Dec 17 11:48:58 pause-016656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:48:58 pause-016656 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:48:58 pause-016656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:48:58 pause-016656 systemd[1]: kubelet.service: Consumed 1.783s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-016656 -n pause-016656
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-016656 -n pause-016656: exit status 2 (331.942364ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-016656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.082694ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-401285 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-401285 describe deploy/metrics-server -n kube-system: exit status 1 (59.570468ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-401285 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-401285
helpers_test.go:244: (dbg) docker inspect old-k8s-version-401285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0",
	        "Created": "2025-12-17T11:51:14.16613837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1918147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:51:14.207683271Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/hosts",
	        "LogPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0-json.log",
	        "Name": "/old-k8s-version-401285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-401285:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-401285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0",
	                "LowerDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-401285",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-401285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-401285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-401285",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-401285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f3da34977bc2a2628caef03e04fabed0e7f27722780c48d97e443453ee1912f8",
	            "SandboxKey": "/var/run/docker/netns/f3da34977bc2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-401285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28c236790a3f61d93a940c5e5d3e7f4dd4932eb2cb6dabba52c6ea762e486410",
	                    "EndpointID": "084e6edbd74c598367a46284aab801f1420121a8975c458b7fdff7be50bb3f5a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:ad:0f:02:6b:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-401285",
	                        "2cc7fce2754b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-401285 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-401285 logs -n 25: (1.197283235s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-213935 sudo docker system info                                                                                                                                                                                                      │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo containerd config dump                                                                                                                                                                                                  │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo crio config                                                                                                                                                                                                             │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ delete  │ -p cilium-213935                                                                                                                                                                                                                              │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p force-systemd-flag-881315 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ ssh     │ force-systemd-flag-881315 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p force-systemd-flag-881315                                                                                                                                                                                                                  │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p cert-options-714247 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ cert-options-714247 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ -p cert-options-714247 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ delete  │ -p cert-options-714247                                                                                                                                                                                                                        │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:51:08
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:51:08.261481 1917124 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:51:08.263049 1917124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:51:08.263065 1917124 out.go:374] Setting ErrFile to fd 2...
	I1217 11:51:08.263070 1917124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:51:08.263304 1917124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:51:08.263836 1917124 out.go:368] Setting JSON to false
	I1217 11:51:08.265026 1917124 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20013,"bootTime":1765952255,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:51:08.265149 1917124 start.go:143] virtualization: kvm guest
	I1217 11:51:08.267304 1917124 out.go:179] * [old-k8s-version-401285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:51:08.269034 1917124 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:51:08.269056 1917124 notify.go:221] Checking for updates...
	I1217 11:51:08.271695 1917124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:51:08.272954 1917124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:51:08.274087 1917124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:51:08.275143 1917124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:51:08.276322 1917124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:51:04.543716 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:04.544213 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:04.544293 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:04.544363 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:04.575125 1888817 cri.go:89] found id: "3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb"
	I1217 11:51:04.575154 1888817 cri.go:89] found id: ""
	I1217 11:51:04.575164 1888817 logs.go:282] 1 containers: [3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb]
	I1217 11:51:04.575234 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:04.580191 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:04.580273 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:04.610086 1888817 cri.go:89] found id: ""
	I1217 11:51:04.610121 1888817 logs.go:282] 0 containers: []
	W1217 11:51:04.610132 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:04.610140 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:04.610202 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:04.642627 1888817 cri.go:89] found id: ""
	I1217 11:51:04.642654 1888817 logs.go:282] 0 containers: []
	W1217 11:51:04.642665 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:04.642673 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:04.642737 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:04.675581 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:04.675613 1888817 cri.go:89] found id: ""
	I1217 11:51:04.675625 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:04.675782 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:04.680744 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:04.680812 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:04.710866 1888817 cri.go:89] found id: ""
	I1217 11:51:04.710892 1888817 logs.go:282] 0 containers: []
	W1217 11:51:04.710900 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:04.710908 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:04.710958 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:04.740784 1888817 cri.go:89] found id: "6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d"
	I1217 11:51:04.740809 1888817 cri.go:89] found id: ""
	I1217 11:51:04.740817 1888817 logs.go:282] 1 containers: [6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d]
	I1217 11:51:04.740869 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:04.745063 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:04.745130 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:04.773888 1888817 cri.go:89] found id: ""
	I1217 11:51:04.773920 1888817 logs.go:282] 0 containers: []
	W1217 11:51:04.773931 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:04.773938 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:04.773999 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:04.805597 1888817 cri.go:89] found id: ""
	I1217 11:51:04.805626 1888817 logs.go:282] 0 containers: []
	W1217 11:51:04.805646 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:04.805660 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:04.805677 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:04.839610 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:04.839639 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:04.910019 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:04.910056 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:04.926777 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:04.926807 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:04.984751 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:04.984776 1888817 logs.go:123] Gathering logs for kube-apiserver [3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb] ...
	I1217 11:51:04.984793 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb"
	I1217 11:51:05.023892 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:05.023931 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:05.054460 1888817 logs.go:123] Gathering logs for kube-controller-manager [6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d] ...
	I1217 11:51:05.054497 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d"
	I1217 11:51:05.086056 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:05.086097 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:07.640606 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:08.277875 1917124 config.go:182] Loaded profile config "cert-expiration-067996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:51:08.277964 1917124 config.go:182] Loaded profile config "kubernetes-upgrade-556754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:51:08.278098 1917124 config.go:182] Loaded profile config "stopped-upgrade-287611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 11:51:08.278212 1917124 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:51:08.304165 1917124 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:51:08.304349 1917124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:51:08.368999 1917124 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:51:08.358562133 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:51:08.369110 1917124 docker.go:319] overlay module found
	I1217 11:51:08.370795 1917124 out.go:179] * Using the docker driver based on user configuration
	I1217 11:51:08.372322 1917124 start.go:309] selected driver: docker
	I1217 11:51:08.372345 1917124 start.go:927] validating driver "docker" against <nil>
	I1217 11:51:08.372392 1917124 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:51:08.373225 1917124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:51:08.428642 1917124 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:51:08.41908864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:51:08.428871 1917124 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:51:08.429147 1917124 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:51:08.430630 1917124 out.go:179] * Using Docker driver with root privileges
	I1217 11:51:08.431690 1917124 cni.go:84] Creating CNI manager for ""
	I1217 11:51:08.431770 1917124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:51:08.431783 1917124 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:51:08.431857 1917124 start.go:353] cluster config:
	{Name:old-k8s-version-401285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-401285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:51:08.433208 1917124 out.go:179] * Starting "old-k8s-version-401285" primary control-plane node in "old-k8s-version-401285" cluster
	I1217 11:51:08.434169 1917124 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:51:08.435193 1917124 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:51:08.436107 1917124 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:51:08.436144 1917124 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 11:51:08.436161 1917124 cache.go:65] Caching tarball of preloaded images
	I1217 11:51:08.436223 1917124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:51:08.436249 1917124 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:51:08.436258 1917124 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1217 11:51:08.436357 1917124 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/config.json ...
	I1217 11:51:08.436382 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/config.json: {Name:mkc020eda7ff5198662e498c979efa075bbe3ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:08.458248 1917124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:51:08.458274 1917124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:51:08.458302 1917124 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:51:08.458335 1917124 start.go:360] acquireMachinesLock for old-k8s-version-401285: {Name:mk169925cec22a7c7cc4f728eb121b2976a57ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:51:08.458460 1917124 start.go:364] duration metric: took 96.284µs to acquireMachinesLock for "old-k8s-version-401285"
	I1217 11:51:08.458493 1917124 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-401285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-401285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:51:08.458605 1917124 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:51:07.369666 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:07.711759 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:54160->192.168.85.2:8443: read: connection reset by peer
	I1217 11:51:07.711849 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:07.711918 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:07.756641 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:07.756680 1894629 cri.go:89] found id: "a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114"
	I1217 11:51:07.756685 1894629 cri.go:89] found id: ""
	I1217 11:51:07.756696 1894629 logs.go:282] 2 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16 a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114]
	I1217 11:51:07.756746 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:07.760681 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:07.764146 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:07.764208 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:07.803957 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:07.803985 1894629 cri.go:89] found id: ""
	I1217 11:51:07.803994 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:07.804056 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:07.807951 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:07.808012 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:07.852781 1894629 cri.go:89] found id: ""
	I1217 11:51:07.852810 1894629 logs.go:282] 0 containers: []
	W1217 11:51:07.852821 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:07.852829 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:07.852888 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:07.891457 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:07.891491 1894629 cri.go:89] found id: ""
	I1217 11:51:07.891500 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:07.891574 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:07.895719 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:07.895784 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:07.941252 1894629 cri.go:89] found id: ""
	I1217 11:51:07.941285 1894629 logs.go:282] 0 containers: []
	W1217 11:51:07.941296 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:07.941303 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:07.941363 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:07.978257 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:07.978277 1894629 cri.go:89] found id: "f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:07.978281 1894629 cri.go:89] found id: ""
	I1217 11:51:07.978289 1894629 logs.go:282] 2 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf]
	I1217 11:51:07.978348 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:07.982611 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:07.986386 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:07.986457 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:08.029593 1894629 cri.go:89] found id: ""
	I1217 11:51:08.029629 1894629 logs.go:282] 0 containers: []
	W1217 11:51:08.029641 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:08.029648 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:08.029703 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:08.076158 1894629 cri.go:89] found id: ""
	I1217 11:51:08.076189 1894629 logs.go:282] 0 containers: []
	W1217 11:51:08.076201 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:08.076215 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:08.076231 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:08.148904 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:08.148939 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:08.170542 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:08.170590 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:08.236502 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:08.236525 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:08.236550 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:08.277846 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:08.277872 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:08.344145 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:08.344198 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:08.384312 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:08.384338 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:08.423844 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:08.423890 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:08.468512 1894629 logs.go:123] Gathering logs for kube-apiserver [a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114] ...
	I1217 11:51:08.468568 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114"
	W1217 11:51:08.506805 1894629 logs.go:130] failed kube-apiserver [a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:51:08.503724    2711 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114\": container with ID starting with a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114 not found: ID does not exist" containerID="a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114"
	time="2025-12-17T11:51:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114\": container with ID starting with a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 11:51:08.503724    2711 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114\": container with ID starting with a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114 not found: ID does not exist" containerID="a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114"
	time="2025-12-17T11:51:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114\": container with ID starting with a8b2fc825b7d3c0b1ea9c89c8426a65a45ccc591a778ea1e4107bddc1a2c6114 not found: ID does not exist"
	
	** /stderr **
	I1217 11:51:08.506842 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:08.506855 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:08.555832 1894629 logs.go:123] Gathering logs for kube-controller-manager [f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf] ...
	I1217 11:51:08.555870 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:08.460659 1917124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:51:08.460999 1917124 start.go:159] libmachine.API.Create for "old-k8s-version-401285" (driver="docker")
	I1217 11:51:08.461040 1917124 client.go:173] LocalClient.Create starting
	I1217 11:51:08.461315 1917124 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:51:08.461388 1917124 main.go:143] libmachine: Decoding PEM data...
	I1217 11:51:08.461412 1917124 main.go:143] libmachine: Parsing certificate...
	I1217 11:51:08.461497 1917124 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:51:08.461545 1917124 main.go:143] libmachine: Decoding PEM data...
	I1217 11:51:08.461575 1917124 main.go:143] libmachine: Parsing certificate...
	I1217 11:51:08.462015 1917124 cli_runner.go:164] Run: docker network inspect old-k8s-version-401285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:51:08.481931 1917124 cli_runner.go:211] docker network inspect old-k8s-version-401285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:51:08.482031 1917124 network_create.go:284] running [docker network inspect old-k8s-version-401285] to gather additional debugging logs...
	I1217 11:51:08.482057 1917124 cli_runner.go:164] Run: docker network inspect old-k8s-version-401285
	W1217 11:51:08.500357 1917124 cli_runner.go:211] docker network inspect old-k8s-version-401285 returned with exit code 1
	I1217 11:51:08.500394 1917124 network_create.go:287] error running [docker network inspect old-k8s-version-401285]: docker network inspect old-k8s-version-401285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-401285 not found
	I1217 11:51:08.500411 1917124 network_create.go:289] output of [docker network inspect old-k8s-version-401285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-401285 not found
	
	** /stderr **
	I1217 11:51:08.500586 1917124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:51:08.521113 1917124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3d92c06bf7e1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:dc:f5:1a:95:c6} reservation:<nil>}
	I1217 11:51:08.522220 1917124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e34a3db6b97 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:b3:69:9a:9a:9f} reservation:<nil>}
	I1217 11:51:08.523373 1917124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d8460370d724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:bb:68:9a:9d:ac} reservation:<nil>}
	I1217 11:51:08.524105 1917124 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cb66266d333d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:79:2f:64:02:df} reservation:<nil>}
	I1217 11:51:08.524840 1917124 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0f9b0e663d9b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:b0:7e:78:0f:69} reservation:<nil>}
	I1217 11:51:08.526049 1917124 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e43f0}
	I1217 11:51:08.526087 1917124 network_create.go:124] attempt to create docker network old-k8s-version-401285 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 11:51:08.526168 1917124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-401285 old-k8s-version-401285
	I1217 11:51:08.579819 1917124 network_create.go:108] docker network old-k8s-version-401285 192.168.94.0/24 created
	I1217 11:51:08.579873 1917124 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-401285" container
	I1217 11:51:08.579977 1917124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:51:08.600162 1917124 cli_runner.go:164] Run: docker volume create old-k8s-version-401285 --label name.minikube.sigs.k8s.io=old-k8s-version-401285 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:51:08.619763 1917124 oci.go:103] Successfully created a docker volume old-k8s-version-401285
	I1217 11:51:08.619852 1917124 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-401285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-401285 --entrypoint /usr/bin/test -v old-k8s-version-401285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:51:09.026962 1917124 oci.go:107] Successfully prepared a docker volume old-k8s-version-401285
	I1217 11:51:09.027036 1917124 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:51:09.027049 1917124 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:51:09.027125 1917124 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-401285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:51:12.641880 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 11:51:12.641946 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:12.642008 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:12.670184 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:12.670210 1888817 cri.go:89] found id: "3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb"
	I1217 11:51:12.670216 1888817 cri.go:89] found id: ""
	I1217 11:51:12.670227 1888817 logs.go:282] 2 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985 3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb]
	I1217 11:51:12.670289 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:12.674584 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:12.678521 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:12.678608 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:12.706162 1888817 cri.go:89] found id: ""
	I1217 11:51:12.706188 1888817 logs.go:282] 0 containers: []
	W1217 11:51:12.706197 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:12.706203 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:12.706251 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:12.734949 1888817 cri.go:89] found id: ""
	I1217 11:51:12.734975 1888817 logs.go:282] 0 containers: []
	W1217 11:51:12.734983 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:12.734989 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:12.735045 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:12.763622 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:12.763659 1888817 cri.go:89] found id: ""
	I1217 11:51:12.763671 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:12.763738 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:12.767923 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:12.768006 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:12.795887 1888817 cri.go:89] found id: ""
	I1217 11:51:12.795912 1888817 logs.go:282] 0 containers: []
	W1217 11:51:12.795920 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:12.795926 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:12.795986 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:12.823992 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:12.824016 1888817 cri.go:89] found id: "6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d"
	I1217 11:51:12.824020 1888817 cri.go:89] found id: ""
	I1217 11:51:12.824028 1888817 logs.go:282] 2 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96 6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d]
	I1217 11:51:12.824083 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:12.828299 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:12.832344 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:12.832423 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:12.859559 1888817 cri.go:89] found id: ""
	I1217 11:51:12.859588 1888817 logs.go:282] 0 containers: []
	W1217 11:51:12.859598 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:12.859617 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:12.859685 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:12.887899 1888817 cri.go:89] found id: ""
	I1217 11:51:12.887932 1888817 logs.go:282] 0 containers: []
	W1217 11:51:12.887944 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:12.887965 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:12.887981 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:12.915678 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:12.915706 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:12.943415 1888817 logs.go:123] Gathering logs for kube-controller-manager [6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d] ...
	I1217 11:51:12.943440 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d"
	I1217 11:51:12.969417 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:12.969442 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:12.986374 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:12.986407 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 11:51:11.095430 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:11.095912 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:11.095962 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:11.096088 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:11.133418 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:11.133444 1894629 cri.go:89] found id: ""
	I1217 11:51:11.133455 1894629 logs.go:282] 1 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:11.133523 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:11.137795 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:11.137865 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:11.176212 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:11.176237 1894629 cri.go:89] found id: ""
	I1217 11:51:11.176248 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:11.176308 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:11.180286 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:11.180351 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:11.219336 1894629 cri.go:89] found id: ""
	I1217 11:51:11.219361 1894629 logs.go:282] 0 containers: []
	W1217 11:51:11.219370 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:11.219378 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:11.219442 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:11.260843 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:11.260872 1894629 cri.go:89] found id: ""
	I1217 11:51:11.260885 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:11.260953 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:11.264881 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:11.264967 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:11.301732 1894629 cri.go:89] found id: ""
	I1217 11:51:11.301760 1894629 logs.go:282] 0 containers: []
	W1217 11:51:11.301768 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:11.301788 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:11.301841 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:11.341458 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:11.341488 1894629 cri.go:89] found id: "f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:11.341495 1894629 cri.go:89] found id: ""
	I1217 11:51:11.341506 1894629 logs.go:282] 2 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf]
	I1217 11:51:11.341594 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:11.346159 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:11.350328 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:11.350446 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:11.390468 1894629 cri.go:89] found id: ""
	I1217 11:51:11.390501 1894629 logs.go:282] 0 containers: []
	W1217 11:51:11.390514 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:11.390523 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:11.390601 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:11.429681 1894629 cri.go:89] found id: ""
	I1217 11:51:11.429709 1894629 logs.go:282] 0 containers: []
	W1217 11:51:11.429719 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:11.429761 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:11.429780 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:11.511437 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:11.511484 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:11.580845 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:11.580868 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:11.580887 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:11.624490 1894629 logs.go:123] Gathering logs for kube-controller-manager [f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf] ...
	I1217 11:51:11.624527 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:11.662788 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:11.662817 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:11.682007 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:11.682041 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:11.722039 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:11.722073 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:11.787287 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:11.787335 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:11.824443 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:11.824474 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:11.859757 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:11.859798 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:14.400616 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:14.401074 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:14.401147 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:14.401210 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:14.439154 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:14.439174 1894629 cri.go:89] found id: ""
	I1217 11:51:14.439185 1894629 logs.go:282] 1 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:14.439243 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:14.443492 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:14.443580 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:14.486729 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:14.486755 1894629 cri.go:89] found id: ""
	I1217 11:51:14.486766 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:14.486829 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:14.491407 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:14.491484 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:14.531780 1894629 cri.go:89] found id: ""
	I1217 11:51:14.531807 1894629 logs.go:282] 0 containers: []
	W1217 11:51:14.531815 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:14.531822 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:14.531868 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:14.573350 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:14.573371 1894629 cri.go:89] found id: ""
	I1217 11:51:14.573380 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:14.573440 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:14.577731 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:14.577804 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:14.621362 1894629 cri.go:89] found id: ""
	I1217 11:51:14.621391 1894629 logs.go:282] 0 containers: []
	W1217 11:51:14.621409 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:14.621418 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:14.621479 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:14.659496 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:14.659524 1894629 cri.go:89] found id: "f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:14.659555 1894629 cri.go:89] found id: ""
	I1217 11:51:14.659574 1894629 logs.go:282] 2 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf]
	I1217 11:51:14.659644 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:14.664030 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:14.668896 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:14.668966 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:14.717639 1894629 cri.go:89] found id: ""
	I1217 11:51:14.717680 1894629 logs.go:282] 0 containers: []
	W1217 11:51:14.717691 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:14.717698 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:14.717758 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:14.764447 1894629 cri.go:89] found id: ""
	I1217 11:51:14.764480 1894629 logs.go:282] 0 containers: []
	W1217 11:51:14.764491 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:14.764511 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:14.764528 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:14.820641 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:14.820672 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:14.877707 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:14.877939 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:14.964677 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:14.964714 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:15.032217 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:15.032252 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:15.072990 1894629 logs.go:123] Gathering logs for kube-controller-manager [f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf] ...
	I1217 11:51:15.073019 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:15.111422 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:15.111450 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:15.148830 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:15.148876 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:15.194655 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:15.194690 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:15.214252 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:15.214288 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:15.279472 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:14.092886 1917124 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-401285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (5.065692234s)
	I1217 11:51:14.092948 1917124 kic.go:203] duration metric: took 5.06589005s to extract preloaded images to volume ...
	W1217 11:51:14.093046 1917124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:51:14.093084 1917124 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:51:14.093124 1917124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:51:14.149089 1917124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-401285 --name old-k8s-version-401285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-401285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-401285 --network old-k8s-version-401285 --ip 192.168.94.2 --volume old-k8s-version-401285:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:51:14.437498 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Running}}
	I1217 11:51:14.457223 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:51:14.479479 1917124 cli_runner.go:164] Run: docker exec old-k8s-version-401285 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:51:14.526618 1917124 oci.go:144] the created container "old-k8s-version-401285" has a running status.
	I1217 11:51:14.526649 1917124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa...
	I1217 11:51:14.676507 1917124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:51:14.714853 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:51:14.736024 1917124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:51:14.736057 1917124 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-401285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:51:14.796633 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:51:14.819704 1917124 machine.go:94] provisionDockerMachine start ...
	I1217 11:51:14.819815 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:14.845645 1917124 main.go:143] libmachine: Using SSH client type: native
	I1217 11:51:14.846021 1917124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1217 11:51:14.846047 1917124 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:51:14.986729 1917124 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-401285
	
	I1217 11:51:14.986768 1917124 ubuntu.go:182] provisioning hostname "old-k8s-version-401285"
	I1217 11:51:14.986843 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:15.008867 1917124 main.go:143] libmachine: Using SSH client type: native
	I1217 11:51:15.009176 1917124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1217 11:51:15.009197 1917124 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-401285 && echo "old-k8s-version-401285" | sudo tee /etc/hostname
	I1217 11:51:15.150922 1917124 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-401285
	
	I1217 11:51:15.151019 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:15.172762 1917124 main.go:143] libmachine: Using SSH client type: native
	I1217 11:51:15.173109 1917124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1217 11:51:15.173148 1917124 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-401285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-401285/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-401285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:51:15.306363 1917124 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:51:15.306393 1917124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:51:15.306424 1917124 ubuntu.go:190] setting up certificates
	I1217 11:51:15.306437 1917124 provision.go:84] configureAuth start
	I1217 11:51:15.306490 1917124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-401285
	I1217 11:51:15.325938 1917124 provision.go:143] copyHostCerts
	I1217 11:51:15.326006 1917124 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:51:15.326018 1917124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:51:15.326092 1917124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:51:15.326200 1917124 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:51:15.326210 1917124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:51:15.326239 1917124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:51:15.326339 1917124 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:51:15.326348 1917124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:51:15.326372 1917124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:51:15.326432 1917124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-401285 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-401285]
	I1217 11:51:15.425004 1917124 provision.go:177] copyRemoteCerts
	I1217 11:51:15.425066 1917124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:51:15.425103 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:15.443919 1917124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:51:15.539762 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:51:15.560026 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 11:51:15.579184 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:51:15.598400 1917124 provision.go:87] duration metric: took 291.94664ms to configureAuth
	I1217 11:51:15.598433 1917124 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:51:15.598653 1917124 config.go:182] Loaded profile config "old-k8s-version-401285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 11:51:15.598795 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:15.618841 1917124 main.go:143] libmachine: Using SSH client type: native
	I1217 11:51:15.619057 1917124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1217 11:51:15.619073 1917124 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:51:15.906126 1917124 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:51:15.906154 1917124 machine.go:97] duration metric: took 1.086425777s to provisionDockerMachine
	I1217 11:51:15.906169 1917124 client.go:176] duration metric: took 7.445045552s to LocalClient.Create
	I1217 11:51:15.906192 1917124 start.go:167] duration metric: took 7.44519443s to libmachine.API.Create "old-k8s-version-401285"
	I1217 11:51:15.906202 1917124 start.go:293] postStartSetup for "old-k8s-version-401285" (driver="docker")
	I1217 11:51:15.906214 1917124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:51:15.906269 1917124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:51:15.906307 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:15.925484 1917124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:51:16.024094 1917124 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:51:16.027925 1917124 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:51:16.027949 1917124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:51:16.027961 1917124 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:51:16.028021 1917124 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:51:16.028102 1917124 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:51:16.028215 1917124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:51:16.036503 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:51:16.058360 1917124 start.go:296] duration metric: took 152.139542ms for postStartSetup
	I1217 11:51:16.058789 1917124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-401285
	I1217 11:51:16.077675 1917124 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/config.json ...
	I1217 11:51:16.077935 1917124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:51:16.077984 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:16.096362 1917124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:51:16.189213 1917124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:51:16.194776 1917124 start.go:128] duration metric: took 7.736151756s to createHost
	I1217 11:51:16.194805 1917124 start.go:83] releasing machines lock for "old-k8s-version-401285", held for 7.736328881s
	I1217 11:51:16.194886 1917124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-401285
	I1217 11:51:16.214033 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:51:16.214091 1917124 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:51:16.214100 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:51:16.214122 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:51:16.214147 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:51:16.214177 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:51:16.214221 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:51:16.214281 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:51:16.214341 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:16.232822 1917124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:51:16.341393 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:51:16.362809 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:51:16.382833 1917124 ssh_runner.go:195] Run: openssl version
	I1217 11:51:16.389715 1917124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:51:16.398631 1917124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:51:16.407049 1917124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:51:16.411360 1917124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:51:16.411452 1917124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:51:16.448017 1917124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:51:16.456315 1917124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	I1217 11:51:16.465027 1917124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:16.473482 1917124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:51:16.481708 1917124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:16.485914 1917124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:16.485983 1917124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:16.523178 1917124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:51:16.531962 1917124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:51:16.540129 1917124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:51:16.548331 1917124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:51:16.556689 1917124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:51:16.560795 1917124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:51:16.560869 1917124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:51:16.596939 1917124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:51:16.605727 1917124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
	I1217 11:51:16.614514 1917124 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:51:16.618640 1917124 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 11:51:16.622715 1917124 ssh_runner.go:195] Run: cat /version.json
	I1217 11:51:16.622801 1917124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:51:16.680161 1917124 ssh_runner.go:195] Run: systemctl --version
	I1217 11:51:16.687042 1917124 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:51:16.723234 1917124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:51:16.728102 1917124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:51:16.728174 1917124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:51:16.755121 1917124 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:51:16.755147 1917124 start.go:496] detecting cgroup driver to use...
	I1217 11:51:16.755177 1917124 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:51:16.755228 1917124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:51:16.772237 1917124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:51:16.785350 1917124 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:51:16.785408 1917124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:51:16.802860 1917124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:51:16.821037 1917124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:51:16.905232 1917124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:51:16.993500 1917124 docker.go:234] disabling docker service ...
	I1217 11:51:16.993628 1917124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:51:17.012571 1917124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:51:17.025808 1917124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:51:17.108108 1917124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:51:17.190930 1917124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:51:17.205433 1917124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:51:17.220841 1917124 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 11:51:17.220903 1917124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.231616 1917124 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:51:17.231688 1917124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.240948 1917124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.250041 1917124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.259123 1917124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:51:17.267364 1917124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.276842 1917124 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.290932 1917124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:51:17.300083 1917124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:51:17.307520 1917124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:51:17.315154 1917124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:51:17.402768 1917124 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 11:51:17.548211 1917124 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:51:17.548277 1917124 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:51:17.552438 1917124 start.go:564] Will wait 60s for crictl version
	I1217 11:51:17.552490 1917124 ssh_runner.go:195] Run: which crictl
	I1217 11:51:17.556430 1917124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:51:17.581835 1917124 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:51:17.581919 1917124 ssh_runner.go:195] Run: crio --version
	I1217 11:51:17.611697 1917124 ssh_runner.go:195] Run: crio --version
	I1217 11:51:17.643373 1917124 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1217 11:51:17.644403 1917124 cli_runner.go:164] Run: docker network inspect old-k8s-version-401285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:51:17.662552 1917124 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 11:51:17.666900 1917124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:51:17.677721 1917124 kubeadm.go:884] updating cluster {Name:old-k8s-version-401285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-401285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:51:17.677845 1917124 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:51:17.677895 1917124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:51:17.710292 1917124 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:51:17.710324 1917124 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:51:17.710381 1917124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:51:17.736772 1917124 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:51:17.736797 1917124 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:51:17.736805 1917124 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1217 11:51:17.736889 1917124 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-401285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-401285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:51:17.736951 1917124 ssh_runner.go:195] Run: crio config
	I1217 11:51:17.788118 1917124 cni.go:84] Creating CNI manager for ""
	I1217 11:51:17.788146 1917124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:51:17.788171 1917124 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:51:17.788203 1917124 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-401285 NodeName:old-k8s-version-401285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:51:17.788380 1917124 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-401285"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:51:17.788459 1917124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 11:51:17.797117 1917124 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:51:17.797189 1917124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:51:17.805347 1917124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 11:51:17.820108 1917124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:51:17.836307 1917124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1217 11:51:17.849820 1917124 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:51:17.854003 1917124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:51:17.865443 1917124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:51:17.954885 1917124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:51:17.977368 1917124 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285 for IP: 192.168.94.2
	I1217 11:51:17.977389 1917124 certs.go:195] generating shared ca certs ...
	I1217 11:51:17.977406 1917124 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:17.977608 1917124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:51:17.977676 1917124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:51:17.977691 1917124 certs.go:257] generating profile certs ...
	I1217 11:51:17.977766 1917124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.key
	I1217 11:51:17.977794 1917124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.crt with IP's: []
	I1217 11:51:18.060771 1917124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.crt ...
	I1217 11:51:18.060800 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.crt: {Name:mkfe023534ef865321df539a215520279930a7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:18.060975 1917124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.key ...
	I1217 11:51:18.060989 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.key: {Name:mk9427909085018cb3e388c705cf74cb6323cdff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:18.061071 1917124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.key.278220ed
	I1217 11:51:18.061087 1917124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.crt.278220ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 11:51:18.114617 1917124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.crt.278220ed ...
	I1217 11:51:18.114649 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.crt.278220ed: {Name:mkdb26c78f95c90bed312681366f2ee7f04026a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:18.114844 1917124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.key.278220ed ...
	I1217 11:51:18.114867 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.key.278220ed: {Name:mka3aabee32c2b0edfd3811ef554feef14a4294c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:18.114947 1917124 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.crt.278220ed -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.crt
	I1217 11:51:18.115037 1917124 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.key.278220ed -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.key
	I1217 11:51:18.115099 1917124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.key
	I1217 11:51:18.115114 1917124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.crt with IP's: []
	I1217 11:51:18.313160 1917124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.crt ...
	I1217 11:51:18.313197 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.crt: {Name:mkcd555724f8239191a8255a3272c4d83d3e7dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:18.313376 1917124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.key ...
	I1217 11:51:18.313391 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.key: {Name:mk3472a15470aac71c9ca532013064007f6f4620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:18.313615 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:51:18.313658 1917124 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:51:18.313669 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:51:18.313692 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:51:18.313724 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:51:18.313751 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:51:18.313792 1917124 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:51:18.314465 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:51:18.333046 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:51:18.350754 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:51:18.369991 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:51:18.389657 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 11:51:18.407932 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:51:18.426194 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:51:18.444386 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:51:18.463771 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:51:18.482923 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:51:18.501521 1917124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:51:18.521307 1917124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:51:18.534815 1917124 ssh_runner.go:195] Run: openssl version
	I1217 11:51:18.541092 1917124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:51:18.548858 1917124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:51:18.557469 1917124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:51:18.561841 1917124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:51:18.561906 1917124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:51:18.597995 1917124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:51:18.606688 1917124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:51:18.615244 1917124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:51:18.623670 1917124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:51:18.627776 1917124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:51:18.627843 1917124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:51:18.662746 1917124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:51:18.670943 1917124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:18.678516 1917124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:51:18.686084 1917124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:18.689986 1917124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:18.690036 1917124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:51:18.726827 1917124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:51:18.735665 1917124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:51:18.740137 1917124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:51:18.740202 1917124 kubeadm.go:401] StartCluster: {Name:old-k8s-version-401285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-401285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:51:18.740297 1917124 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:51:18.740357 1917124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:51:18.769869 1917124 cri.go:89] found id: ""
	I1217 11:51:18.769942 1917124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:51:18.778880 1917124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:51:18.787777 1917124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:51:18.787840 1917124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:51:18.796802 1917124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:51:18.796825 1917124 kubeadm.go:158] found existing configuration files:
	
	I1217 11:51:18.796875 1917124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:51:18.805641 1917124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:51:18.805707 1917124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:51:18.814026 1917124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:51:18.823012 1917124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:51:18.823081 1917124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:51:18.832365 1917124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:51:18.840685 1917124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:51:18.840748 1917124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:51:18.848200 1917124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:51:18.855903 1917124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:51:18.855956 1917124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:51:18.863730 1917124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:51:18.909372 1917124 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 11:51:18.909452 1917124 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:51:18.947157 1917124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:51:18.947294 1917124 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:51:18.947354 1917124 kubeadm.go:319] OS: Linux
	I1217 11:51:18.947428 1917124 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:51:18.947495 1917124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:51:18.947587 1917124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:51:18.947654 1917124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:51:18.947715 1917124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:51:18.947785 1917124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:51:18.947878 1917124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:51:18.947962 1917124 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:51:19.019261 1917124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:51:19.019417 1917124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:51:19.019552 1917124 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 11:51:19.173242 1917124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:51:17.780706 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:17.781236 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:17.781311 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:17.781383 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:17.821732 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:17.821755 1894629 cri.go:89] found id: ""
	I1217 11:51:17.821765 1894629 logs.go:282] 1 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:17.821818 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:17.825543 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:17.825610 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:17.864478 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:17.864498 1894629 cri.go:89] found id: ""
	I1217 11:51:17.864507 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:17.864576 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:17.868755 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:17.868834 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:17.915408 1894629 cri.go:89] found id: ""
	I1217 11:51:17.915434 1894629 logs.go:282] 0 containers: []
	W1217 11:51:17.915442 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:17.915448 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:17.915493 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:17.952324 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:17.952345 1894629 cri.go:89] found id: ""
	I1217 11:51:17.952355 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:17.952422 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:17.956740 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:17.956798 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:17.998425 1894629 cri.go:89] found id: ""
	I1217 11:51:17.998455 1894629 logs.go:282] 0 containers: []
	W1217 11:51:17.998466 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:17.998475 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:17.998559 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:18.041896 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:18.041925 1894629 cri.go:89] found id: "f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:18.041931 1894629 cri.go:89] found id: ""
	I1217 11:51:18.041942 1894629 logs.go:282] 2 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf]
	I1217 11:51:18.041999 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:18.046149 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:18.049825 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:18.049885 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:18.087083 1894629 cri.go:89] found id: ""
	I1217 11:51:18.087111 1894629 logs.go:282] 0 containers: []
	W1217 11:51:18.087119 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:18.087127 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:18.087192 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:18.123370 1894629 cri.go:89] found id: ""
	I1217 11:51:18.123395 1894629 logs.go:282] 0 containers: []
	W1217 11:51:18.123404 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:18.123419 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:18.123436 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:18.218138 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:18.218176 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:18.237884 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:18.237914 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:18.279933 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:18.279962 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:18.341749 1894629 logs.go:123] Gathering logs for kube-controller-manager [f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf] ...
	I1217 11:51:18.341787 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9ca1459b4473abebddab448787ebe704300bb504e03e293e88a066c7a88abdf"
	I1217 11:51:18.381144 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:18.381187 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:18.415729 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:18.415760 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:18.456230 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:18.456257 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:18.522309 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:18.522333 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:18.522348 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:18.563073 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:18.563103 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:19.175471 1917124 out.go:252]   - Generating certificates and keys ...
	I1217 11:51:19.175590 1917124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:51:19.175700 1917124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:51:19.286099 1917124 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:51:19.382959 1917124 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:51:19.436054 1917124 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:51:19.557449 1917124 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:51:19.727173 1917124 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:51:19.727364 1917124 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-401285] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 11:51:19.846240 1917124 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:51:19.846440 1917124 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-401285] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 11:51:19.990897 1917124 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:51:20.052858 1917124 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:51:20.346793 1917124 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:51:20.346881 1917124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:51:20.752398 1917124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:51:20.881889 1917124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:51:21.036040 1917124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:51:21.274987 1917124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:51:21.275566 1917124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:51:21.281163 1917124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:51:21.282962 1917124 out.go:252]   - Booting up control plane ...
	I1217 11:51:21.283069 1917124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:51:21.283155 1917124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:51:21.283983 1917124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:51:21.299891 1917124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:51:21.300890 1917124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:51:21.300955 1917124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:51:21.408935 1917124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 11:51:23.044549 1888817 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.058076726s)
	W1217 11:51:23.044639 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 11:51:23.044653 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:23.044668 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:23.101116 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:23.101155 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:23.139031 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:23.139073 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:23.219050 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:23.219085 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:23.254244 1888817 logs.go:123] Gathering logs for kube-apiserver [3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb] ...
	I1217 11:51:23.254275 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b48a87718d70abecb3f3e9c6a83b50370825422d864ff29bd4d4730cc8aebdb"
	I1217 11:51:21.101690 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:21.102151 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:21.102217 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:21.102280 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:21.141858 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:21.141880 1894629 cri.go:89] found id: ""
	I1217 11:51:21.141890 1894629 logs.go:282] 1 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:21.141956 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:21.145989 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:21.146061 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:21.183491 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:21.183514 1894629 cri.go:89] found id: ""
	I1217 11:51:21.183522 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:21.183608 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:21.187695 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:21.187772 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:21.225083 1894629 cri.go:89] found id: ""
	I1217 11:51:21.225114 1894629 logs.go:282] 0 containers: []
	W1217 11:51:21.225124 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:21.225131 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:21.225178 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:21.261942 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:21.261964 1894629 cri.go:89] found id: ""
	I1217 11:51:21.261976 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:21.262034 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:21.266196 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:21.266264 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:21.305083 1894629 cri.go:89] found id: ""
	I1217 11:51:21.305111 1894629 logs.go:282] 0 containers: []
	W1217 11:51:21.305121 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:21.305131 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:21.305198 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:21.347757 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:21.347787 1894629 cri.go:89] found id: ""
	I1217 11:51:21.347798 1894629 logs.go:282] 1 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:21.347868 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:21.351727 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:21.351799 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:21.388449 1894629 cri.go:89] found id: ""
	I1217 11:51:21.388482 1894629 logs.go:282] 0 containers: []
	W1217 11:51:21.388494 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:21.388502 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:21.388579 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:21.426771 1894629 cri.go:89] found id: ""
	I1217 11:51:21.426798 1894629 logs.go:282] 0 containers: []
	W1217 11:51:21.426811 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:21.426831 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:21.426845 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:21.477007 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:21.477042 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:21.544294 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:21.544342 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:21.563332 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:21.563360 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:21.633726 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:21.633747 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:21.633761 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:21.674580 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:21.674617 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:21.721473 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:21.721500 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:21.757128 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:21.757164 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:21.797832 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:21.797863 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:24.375799 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:24.376287 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:24.376450 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:24.376633 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:24.414260 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:24.414287 1894629 cri.go:89] found id: ""
	I1217 11:51:24.414300 1894629 logs.go:282] 1 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:24.414405 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:24.418231 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:24.418308 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:24.453396 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:24.453421 1894629 cri.go:89] found id: ""
	I1217 11:51:24.453437 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:24.453484 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:24.457526 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:24.457622 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:24.493858 1894629 cri.go:89] found id: ""
	I1217 11:51:24.493883 1894629 logs.go:282] 0 containers: []
	W1217 11:51:24.493891 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:24.493897 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:24.493949 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:24.530666 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:24.530696 1894629 cri.go:89] found id: ""
	I1217 11:51:24.530709 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:24.530765 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:24.534742 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:24.534805 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:24.570390 1894629 cri.go:89] found id: ""
	I1217 11:51:24.570427 1894629 logs.go:282] 0 containers: []
	W1217 11:51:24.570439 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:24.570448 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:24.570498 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:24.607133 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:24.607156 1894629 cri.go:89] found id: ""
	I1217 11:51:24.607166 1894629 logs.go:282] 1 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:24.607227 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:24.611450 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:24.611525 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:24.647785 1894629 cri.go:89] found id: ""
	I1217 11:51:24.647809 1894629 logs.go:282] 0 containers: []
	W1217 11:51:24.647817 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:24.647824 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:24.647866 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:24.683842 1894629 cri.go:89] found id: ""
	I1217 11:51:24.683870 1894629 logs.go:282] 0 containers: []
	W1217 11:51:24.683878 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:24.683892 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:24.683902 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:24.778786 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:24.778827 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:24.801370 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:24.801415 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:24.875891 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:24.875914 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:24.875930 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:24.953408 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:24.953450 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:24.996527 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:24.996571 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:25.043999 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:25.044038 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:25.091551 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:25.091595 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:25.139602 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:25.139646 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:25.411096 1917124 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.002349 seconds
	I1217 11:51:25.411283 1917124 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:51:25.424418 1917124 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:51:25.945705 1917124 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:51:25.946018 1917124 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-401285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:51:26.455646 1917124 kubeadm.go:319] [bootstrap-token] Using token: bh9d2g.broubzisftahf6gm
	I1217 11:51:26.457112 1917124 out.go:252]   - Configuring RBAC rules ...
	I1217 11:51:26.457292 1917124 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:51:26.462092 1917124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:51:26.468300 1917124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:51:26.471227 1917124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:51:26.473998 1917124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:51:26.477353 1917124 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:51:26.487208 1917124 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:51:26.678705 1917124 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:51:26.866028 1917124 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:51:26.866898 1917124 kubeadm.go:319] 
	I1217 11:51:26.866996 1917124 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:51:26.867005 1917124 kubeadm.go:319] 
	I1217 11:51:26.867103 1917124 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:51:26.867122 1917124 kubeadm.go:319] 
	I1217 11:51:26.867150 1917124 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:51:26.867202 1917124 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:51:26.867247 1917124 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:51:26.867253 1917124 kubeadm.go:319] 
	I1217 11:51:26.867297 1917124 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:51:26.867303 1917124 kubeadm.go:319] 
	I1217 11:51:26.867343 1917124 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:51:26.867360 1917124 kubeadm.go:319] 
	I1217 11:51:26.867443 1917124 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:51:26.867616 1917124 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:51:26.867691 1917124 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:51:26.867701 1917124 kubeadm.go:319] 
	I1217 11:51:26.867826 1917124 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:51:26.867935 1917124 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:51:26.867949 1917124 kubeadm.go:319] 
	I1217 11:51:26.868077 1917124 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bh9d2g.broubzisftahf6gm \
	I1217 11:51:26.868235 1917124 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:51:26.868278 1917124 kubeadm.go:319] 	--control-plane 
	I1217 11:51:26.868290 1917124 kubeadm.go:319] 
	I1217 11:51:26.868419 1917124 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:51:26.868432 1917124 kubeadm.go:319] 
	I1217 11:51:26.868554 1917124 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bh9d2g.broubzisftahf6gm \
	I1217 11:51:26.868717 1917124 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:51:26.870662 1917124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:51:26.870795 1917124 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:51:26.870825 1917124 cni.go:84] Creating CNI manager for ""
	I1217 11:51:26.870840 1917124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:51:26.872867 1917124 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:51:26.874582 1917124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:51:26.879172 1917124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1217 11:51:26.879191 1917124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:51:26.893487 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:51:27.600052 1917124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:51:27.600132 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:27.600180 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-401285 minikube.k8s.io/updated_at=2025_12_17T11_51_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=old-k8s-version-401285 minikube.k8s.io/primary=true
	I1217 11:51:27.611380 1917124 ops.go:34] apiserver oom_adj: -16
	I1217 11:51:27.691350 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:28.191723 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:25.789455 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:27.084660 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47858->192.168.76.2:8443: read: connection reset by peer
	I1217 11:51:27.084731 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:27.084798 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:27.119205 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:27.119227 1888817 cri.go:89] found id: ""
	I1217 11:51:27.119235 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:27.119287 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.123467 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:27.123525 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:27.152860 1888817 cri.go:89] found id: ""
	I1217 11:51:27.152898 1888817 logs.go:282] 0 containers: []
	W1217 11:51:27.152909 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:27.152917 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:27.152985 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:27.190245 1888817 cri.go:89] found id: ""
	I1217 11:51:27.190295 1888817 logs.go:282] 0 containers: []
	W1217 11:51:27.190306 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:27.190314 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:27.190438 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:27.225007 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:27.225035 1888817 cri.go:89] found id: ""
	I1217 11:51:27.225046 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:27.225150 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.229615 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:27.229694 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:27.257784 1888817 cri.go:89] found id: ""
	I1217 11:51:27.257813 1888817 logs.go:282] 0 containers: []
	W1217 11:51:27.257823 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:27.257832 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:27.257898 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:27.286097 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:27.286126 1888817 cri.go:89] found id: "6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d"
	I1217 11:51:27.286132 1888817 cri.go:89] found id: ""
	I1217 11:51:27.286143 1888817 logs.go:282] 2 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96 6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d]
	I1217 11:51:27.286206 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.290474 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.294509 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:27.294593 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:27.322933 1888817 cri.go:89] found id: ""
	I1217 11:51:27.322965 1888817 logs.go:282] 0 containers: []
	W1217 11:51:27.322977 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:27.322986 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:27.323047 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:27.351565 1888817 cri.go:89] found id: ""
	I1217 11:51:27.351591 1888817 logs.go:282] 0 containers: []
	W1217 11:51:27.351599 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:27.351622 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:27.351636 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:27.453298 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:27.453333 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:27.489410 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:27.489448 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:27.519857 1888817 logs.go:123] Gathering logs for kube-controller-manager [6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d] ...
	I1217 11:51:27.519886 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6781d9fee42eb1fcf327050e4f54755d647a84f1523c5421ff2b2b738232285d"
	I1217 11:51:27.551057 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:27.551090 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:27.606069 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:27.606100 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:27.625638 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:27.625684 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:27.695443 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:27.695487 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:27.695505 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:27.729215 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:27.729264 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:27.683626 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:27.684056 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:27.684121 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:27.684178 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:27.730397 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:27.730425 1894629 cri.go:89] found id: ""
	I1217 11:51:27.730438 1894629 logs.go:282] 1 containers: [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:27.730498 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.734989 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:27.735068 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:27.784523 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:27.784558 1894629 cri.go:89] found id: ""
	I1217 11:51:27.784569 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:27.784619 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.788897 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:27.788979 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:27.834455 1894629 cri.go:89] found id: ""
	I1217 11:51:27.834480 1894629 logs.go:282] 0 containers: []
	W1217 11:51:27.834488 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:27.834494 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:27.834578 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:27.875868 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:27.875889 1894629 cri.go:89] found id: ""
	I1217 11:51:27.875897 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:27.875950 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.880231 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:27.880306 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:27.917018 1894629 cri.go:89] found id: ""
	I1217 11:51:27.917043 1894629 logs.go:282] 0 containers: []
	W1217 11:51:27.917051 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:27.917057 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:27.917113 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:27.957179 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:27.957209 1894629 cri.go:89] found id: ""
	I1217 11:51:27.957222 1894629 logs.go:282] 1 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:27.957280 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:27.961734 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:27.961809 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:28.003048 1894629 cri.go:89] found id: ""
	I1217 11:51:28.003085 1894629 logs.go:282] 0 containers: []
	W1217 11:51:28.003094 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:28.003100 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:28.003164 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:28.041477 1894629 cri.go:89] found id: ""
	I1217 11:51:28.041508 1894629 logs.go:282] 0 containers: []
	W1217 11:51:28.041520 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:28.041569 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:28.041592 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:28.083719 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:28.083754 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:28.153087 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:28.153124 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:28.192199 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:28.192234 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:28.237208 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:28.237249 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:28.324771 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:28.324806 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:28.345190 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:28.345224 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:28.409261 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:28.409284 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:28.409305 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:28.451679 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:28.451713 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:28.691692 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:29.192179 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:29.692234 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:30.192442 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:30.691684 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:31.191833 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:31.692196 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:32.192214 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:32.691832 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:33.192405 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:30.275358 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:30.275824 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:30.275881 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:30.275932 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:30.305011 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:30.305038 1888817 cri.go:89] found id: ""
	I1217 11:51:30.305050 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:30.305111 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:30.309102 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:30.309157 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:30.337155 1888817 cri.go:89] found id: ""
	I1217 11:51:30.337177 1888817 logs.go:282] 0 containers: []
	W1217 11:51:30.337185 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:30.337191 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:30.337236 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:30.365675 1888817 cri.go:89] found id: ""
	I1217 11:51:30.365699 1888817 logs.go:282] 0 containers: []
	W1217 11:51:30.365707 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:30.365713 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:30.365775 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:30.395137 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:30.395159 1888817 cri.go:89] found id: ""
	I1217 11:51:30.395169 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:30.395232 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:30.399272 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:30.399361 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:30.427179 1888817 cri.go:89] found id: ""
	I1217 11:51:30.427207 1888817 logs.go:282] 0 containers: []
	W1217 11:51:30.427217 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:30.427225 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:30.427286 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:30.454590 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:30.454613 1888817 cri.go:89] found id: ""
	I1217 11:51:30.454622 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:30.454683 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:30.458789 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:30.458857 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:30.487217 1888817 cri.go:89] found id: ""
	I1217 11:51:30.487247 1888817 logs.go:282] 0 containers: []
	W1217 11:51:30.487257 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:30.487265 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:30.487328 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:30.515251 1888817 cri.go:89] found id: ""
	I1217 11:51:30.515280 1888817 logs.go:282] 0 containers: []
	W1217 11:51:30.515291 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:30.515304 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:30.515320 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:30.572935 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:30.572957 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:30.572972 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:30.604092 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:30.604123 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:30.633298 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:30.633325 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:30.659745 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:30.659774 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:30.706623 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:30.706660 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:30.744694 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:30.744738 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:30.831993 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:30.832035 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:30.993071 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:33.692408 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:34.192127 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:34.691498 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:35.192020 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:35.691526 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:36.192315 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:36.691434 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:37.191807 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:37.692440 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:38.191890 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:33.350864 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:33.351376 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:33.351435 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:33.351494 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:33.380712 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:33.380732 1888817 cri.go:89] found id: ""
	I1217 11:51:33.380740 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:33.380790 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:33.384856 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:33.384918 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:33.412503 1888817 cri.go:89] found id: ""
	I1217 11:51:33.412528 1888817 logs.go:282] 0 containers: []
	W1217 11:51:33.412572 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:33.412581 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:33.412657 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:33.440472 1888817 cri.go:89] found id: ""
	I1217 11:51:33.440502 1888817 logs.go:282] 0 containers: []
	W1217 11:51:33.440513 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:33.440520 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:33.440599 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:33.469592 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:33.469618 1888817 cri.go:89] found id: ""
	I1217 11:51:33.469629 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:33.469684 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:33.474085 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:33.474160 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:33.503820 1888817 cri.go:89] found id: ""
	I1217 11:51:33.503852 1888817 logs.go:282] 0 containers: []
	W1217 11:51:33.503863 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:33.503870 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:33.503925 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:33.532181 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:33.532203 1888817 cri.go:89] found id: ""
	I1217 11:51:33.532211 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:33.532264 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:33.536648 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:33.536711 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:33.565378 1888817 cri.go:89] found id: ""
	I1217 11:51:33.565403 1888817 logs.go:282] 0 containers: []
	W1217 11:51:33.565412 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:33.565419 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:33.565468 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:33.595338 1888817 cri.go:89] found id: ""
	I1217 11:51:33.595363 1888817 logs.go:282] 0 containers: []
	W1217 11:51:33.595382 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:33.595394 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:33.595419 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:33.673594 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:33.673632 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:33.691049 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:33.691078 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:33.765027 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:33.765053 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:33.765069 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:33.799721 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:33.799758 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:33.829406 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:33.829438 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:33.859434 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:33.859462 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:33.908917 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:33.908954 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:36.443375 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:36.443873 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:36.443928 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:36.443982 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:36.474297 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:36.474319 1888817 cri.go:89] found id: ""
	I1217 11:51:36.474330 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:36.474391 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.478782 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:36.478864 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:36.509603 1888817 cri.go:89] found id: ""
	I1217 11:51:36.509628 1888817 logs.go:282] 0 containers: []
	W1217 11:51:36.509637 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:36.509643 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:36.509701 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:36.541105 1888817 cri.go:89] found id: ""
	I1217 11:51:36.541141 1888817 logs.go:282] 0 containers: []
	W1217 11:51:36.541152 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:36.541161 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:36.541235 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:36.572024 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:36.572050 1888817 cri.go:89] found id: ""
	I1217 11:51:36.572061 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:36.572129 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.577026 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:36.577112 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:36.606990 1888817 cri.go:89] found id: ""
	I1217 11:51:36.607028 1888817 logs.go:282] 0 containers: []
	W1217 11:51:36.607037 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:36.607048 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:36.607106 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:36.636475 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:36.636497 1888817 cri.go:89] found id: ""
	I1217 11:51:36.636508 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:36.636588 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.640902 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:36.640989 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:36.670758 1888817 cri.go:89] found id: ""
	I1217 11:51:36.670792 1888817 logs.go:282] 0 containers: []
	W1217 11:51:36.670801 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:36.670810 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:36.670865 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:36.699911 1888817 cri.go:89] found id: ""
	I1217 11:51:36.699939 1888817 logs.go:282] 0 containers: []
	W1217 11:51:36.699948 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:36.699957 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:36.699970 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:36.729994 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:36.730027 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:36.760984 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:36.761019 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:36.812636 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:36.812675 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:36.844610 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:36.844641 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:36.924803 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:36.924839 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:36.942431 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:36.942461 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:37.000468 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:37.000494 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:37.000510 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:38.691503 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:39.191756 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:39.691605 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:40.192061 1917124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:51:40.262095 1917124 kubeadm.go:1114] duration metric: took 12.662036068s to wait for elevateKubeSystemPrivileges
	I1217 11:51:40.262130 1917124 kubeadm.go:403] duration metric: took 21.521934882s to StartCluster
	I1217 11:51:40.262158 1917124 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:40.262234 1917124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:51:40.263450 1917124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:51:40.263725 1917124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:51:40.263744 1917124 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:51:40.263819 1917124 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:51:40.263895 1917124 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-401285"
	I1217 11:51:40.263917 1917124 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-401285"
	I1217 11:51:40.263951 1917124 host.go:66] Checking if "old-k8s-version-401285" exists ...
	I1217 11:51:40.263967 1917124 config.go:182] Loaded profile config "old-k8s-version-401285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 11:51:40.263963 1917124 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-401285"
	I1217 11:51:40.264002 1917124 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-401285"
	I1217 11:51:40.264428 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:51:40.264500 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:51:40.268154 1917124 out.go:179] * Verifying Kubernetes components...
	I1217 11:51:40.269963 1917124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:51:40.290514 1917124 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-401285"
	I1217 11:51:40.290571 1917124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:51:35.994233 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 11:51:35.994310 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:35.994379 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:36.031439 1894629 cri.go:89] found id: "070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:36.031460 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:36.031464 1894629 cri.go:89] found id: ""
	I1217 11:51:36.031472 1894629 logs.go:282] 2 containers: [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:36.031522 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.035629 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.039338 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:36.039408 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:36.075834 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:36.075856 1894629 cri.go:89] found id: ""
	I1217 11:51:36.075866 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:36.075926 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.079797 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:36.079861 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:36.115820 1894629 cri.go:89] found id: ""
	I1217 11:51:36.115848 1894629 logs.go:282] 0 containers: []
	W1217 11:51:36.115859 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:36.115868 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:36.115929 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:36.152989 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:36.153011 1894629 cri.go:89] found id: ""
	I1217 11:51:36.153020 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:36.153071 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.156891 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:36.156966 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:36.195963 1894629 cri.go:89] found id: ""
	I1217 11:51:36.195989 1894629 logs.go:282] 0 containers: []
	W1217 11:51:36.195997 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:36.196003 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:36.196068 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:36.235862 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:36.235888 1894629 cri.go:89] found id: ""
	I1217 11:51:36.235897 1894629 logs.go:282] 1 containers: [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:36.235951 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:36.239816 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:36.239871 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:36.280110 1894629 cri.go:89] found id: ""
	I1217 11:51:36.280133 1894629 logs.go:282] 0 containers: []
	W1217 11:51:36.280142 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:36.280148 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:36.280211 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:36.317159 1894629 cri.go:89] found id: ""
	I1217 11:51:36.317183 1894629 logs.go:282] 0 containers: []
	W1217 11:51:36.317190 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:36.317207 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:36.317221 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:36.354988 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:36.355030 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:36.397619 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:36.397653 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:36.488055 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:36.488093 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:36.510593 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:36.510623 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 11:51:40.290577 1917124 host.go:66] Checking if "old-k8s-version-401285" exists ...
	I1217 11:51:40.291162 1917124 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:51:40.292818 1917124 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:51:40.292852 1917124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:51:40.292904 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:40.323593 1917124 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:51:40.323676 1917124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:51:40.323757 1917124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:51:40.323907 1917124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:51:40.344062 1917124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:51:40.364408 1917124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:51:40.416254 1917124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:51:40.434686 1917124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:51:40.454993 1917124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:51:40.611301 1917124 start.go:1013] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 11:51:40.612725 1917124 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-401285" to be "Ready" ...
	I1217 11:51:40.831485 1917124 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 11:51:40.832761 1917124 addons.go:530] duration metric: took 568.93833ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:51:41.115636 1917124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-401285" context rescaled to 1 replicas
	W1217 11:51:42.616092 1917124 node_ready.go:57] node "old-k8s-version-401285" has "Ready":"False" status (will retry)
	I1217 11:51:39.532480 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:39.532996 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:39.533059 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:39.533119 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:39.561242 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:39.561263 1888817 cri.go:89] found id: ""
	I1217 11:51:39.561271 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:39.561357 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:39.565391 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:39.565465 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:39.593049 1888817 cri.go:89] found id: ""
	I1217 11:51:39.593078 1888817 logs.go:282] 0 containers: []
	W1217 11:51:39.593090 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:39.593098 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:39.593171 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:39.621457 1888817 cri.go:89] found id: ""
	I1217 11:51:39.621488 1888817 logs.go:282] 0 containers: []
	W1217 11:51:39.621500 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:39.621508 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:39.621581 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:39.649399 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:39.649421 1888817 cri.go:89] found id: ""
	I1217 11:51:39.649432 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:39.649504 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:39.653683 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:39.653765 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:39.681395 1888817 cri.go:89] found id: ""
	I1217 11:51:39.681420 1888817 logs.go:282] 0 containers: []
	W1217 11:51:39.681428 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:39.681434 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:39.681482 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:39.712780 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:39.712807 1888817 cri.go:89] found id: ""
	I1217 11:51:39.712819 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:39.712884 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:39.717840 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:39.717899 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:39.750215 1888817 cri.go:89] found id: ""
	I1217 11:51:39.750250 1888817 logs.go:282] 0 containers: []
	W1217 11:51:39.750263 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:39.750271 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:39.750379 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:39.781413 1888817 cri.go:89] found id: ""
	I1217 11:51:39.781438 1888817 logs.go:282] 0 containers: []
	W1217 11:51:39.781453 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:39.781464 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:39.781479 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:39.809508 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:39.809551 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:39.859316 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:39.859352 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:39.891838 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:39.891868 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:40.006792 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:40.006842 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:40.026360 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:40.026390 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:40.085320 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:40.085341 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:40.085355 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:40.122160 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:40.122208 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:42.659916 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:42.660342 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:42.660412 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:42.660459 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:42.690468 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:42.690488 1888817 cri.go:89] found id: ""
	I1217 11:51:42.690496 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:42.690567 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:42.695257 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:42.695330 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:42.725728 1888817 cri.go:89] found id: ""
	I1217 11:51:42.725761 1888817 logs.go:282] 0 containers: []
	W1217 11:51:42.725772 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:42.725780 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:42.725842 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:42.753516 1888817 cri.go:89] found id: ""
	I1217 11:51:42.753552 1888817 logs.go:282] 0 containers: []
	W1217 11:51:42.753564 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:42.753572 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:42.753626 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:42.781455 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:42.781483 1888817 cri.go:89] found id: ""
	I1217 11:51:42.781496 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:42.781588 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:42.785586 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:42.785648 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:42.817928 1888817 cri.go:89] found id: ""
	I1217 11:51:42.817956 1888817 logs.go:282] 0 containers: []
	W1217 11:51:42.817967 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:42.817975 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:42.818041 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:42.846890 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:42.846918 1888817 cri.go:89] found id: ""
	I1217 11:51:42.846930 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:42.846999 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:42.851292 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:42.851361 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:42.879925 1888817 cri.go:89] found id: ""
	I1217 11:51:42.879949 1888817 logs.go:282] 0 containers: []
	W1217 11:51:42.879956 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:42.879962 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:42.880010 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:42.907387 1888817 cri.go:89] found id: ""
	I1217 11:51:42.907418 1888817 logs.go:282] 0 containers: []
	W1217 11:51:42.907429 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:42.907450 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:42.907468 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:42.934991 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:42.935017 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:42.984765 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:42.984798 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:43.017114 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:43.017140 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:43.098882 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:43.098918 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:43.117135 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:43.117169 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:43.174735 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:43.174758 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:43.174775 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:43.208708 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:43.208743 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	W1217 11:51:45.115807 1917124 node_ready.go:57] node "old-k8s-version-401285" has "Ready":"False" status (will retry)
	W1217 11:51:47.115900 1917124 node_ready.go:57] node "old-k8s-version-401285" has "Ready":"False" status (will retry)
	I1217 11:51:45.737088 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:45.737563 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:45.737625 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:45.737681 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:45.766950 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:45.766970 1888817 cri.go:89] found id: ""
	I1217 11:51:45.766977 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:45.767032 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:45.771237 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:45.771292 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:45.800415 1888817 cri.go:89] found id: ""
	I1217 11:51:45.800446 1888817 logs.go:282] 0 containers: []
	W1217 11:51:45.800457 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:45.800465 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:45.800546 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:45.829970 1888817 cri.go:89] found id: ""
	I1217 11:51:45.830002 1888817 logs.go:282] 0 containers: []
	W1217 11:51:45.830013 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:45.830022 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:45.830093 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:45.859876 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:45.859903 1888817 cri.go:89] found id: ""
	I1217 11:51:45.859913 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:45.859979 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:45.864395 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:45.864482 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:45.893818 1888817 cri.go:89] found id: ""
	I1217 11:51:45.893850 1888817 logs.go:282] 0 containers: []
	W1217 11:51:45.893861 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:45.893869 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:45.893935 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:45.923692 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:45.923716 1888817 cri.go:89] found id: ""
	I1217 11:51:45.923726 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:45.923798 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:45.928141 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:45.928222 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:45.957525 1888817 cri.go:89] found id: ""
	I1217 11:51:45.957569 1888817 logs.go:282] 0 containers: []
	W1217 11:51:45.957582 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:45.957590 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:45.957653 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:45.988465 1888817 cri.go:89] found id: ""
	I1217 11:51:45.988506 1888817 logs.go:282] 0 containers: []
	W1217 11:51:45.988518 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:45.988545 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:45.988566 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:46.018841 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:46.018871 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:46.047000 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:46.047028 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:46.098026 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:46.098058 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:46.131698 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:46.131729 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:46.213064 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:46.213097 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:46.230626 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:46.230658 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:46.292089 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:46.292109 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:46.292126 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:46.585080 1894629 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.074435081s)
	W1217 11:51:46.585117 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 11:51:46.585124 1894629 logs.go:123] Gathering logs for kube-apiserver [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da] ...
	I1217 11:51:46.585135 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:46.627787 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:46.627827 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:46.669822 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:46.669861 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:46.710778 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:46.710814 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:46.788340 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:46.788386 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:49.328800 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:49.398892 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:45122->192.168.85.2:8443: read: connection reset by peer
	I1217 11:51:49.398963 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:49.399022 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:49.439088 1894629 cri.go:89] found id: "070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:49.439109 1894629 cri.go:89] found id: "2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:49.439113 1894629 cri.go:89] found id: ""
	I1217 11:51:49.439123 1894629 logs.go:282] 2 containers: [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16]
	I1217 11:51:49.439172 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.443152 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.446708 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:49.446776 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:49.483602 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:49.483628 1894629 cri.go:89] found id: ""
	I1217 11:51:49.483636 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:49.483686 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.487699 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:49.487767 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:49.525997 1894629 cri.go:89] found id: ""
	I1217 11:51:49.526023 1894629 logs.go:282] 0 containers: []
	W1217 11:51:49.526032 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:49.526038 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:49.526083 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:49.564243 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:49.564268 1894629 cri.go:89] found id: ""
	I1217 11:51:49.564278 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:49.564336 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.568464 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:49.568541 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:49.605557 1894629 cri.go:89] found id: ""
	I1217 11:51:49.605588 1894629 logs.go:282] 0 containers: []
	W1217 11:51:49.605600 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:49.605608 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:49.605670 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:49.644310 1894629 cri.go:89] found id: "fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:49.644342 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:49.644347 1894629 cri.go:89] found id: ""
	I1217 11:51:49.644355 1894629 logs.go:282] 2 containers: [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:49.644406 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.648401 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.652226 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:49.652281 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:49.688684 1894629 cri.go:89] found id: ""
	I1217 11:51:49.688711 1894629 logs.go:282] 0 containers: []
	W1217 11:51:49.688720 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:49.688726 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:49.688780 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:49.726975 1894629 cri.go:89] found id: ""
	I1217 11:51:49.727007 1894629 logs.go:282] 0 containers: []
	W1217 11:51:49.727019 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:49.727033 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:49.727049 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:49.789842 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:49.789864 1894629 logs.go:123] Gathering logs for kube-apiserver [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da] ...
	I1217 11:51:49.789880 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:49.832742 1894629 logs.go:123] Gathering logs for kube-apiserver [2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16] ...
	I1217 11:51:49.832772 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2628150b760084194730f7a892bef00100ccd1297dba7034862df6277248ac16"
	I1217 11:51:49.872968 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:49.872999 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:49.909694 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:49.909727 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:49.998888 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:49.998939 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:50.018165 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:50.018197 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:50.060196 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:50.060232 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:50.127897 1894629 logs.go:123] Gathering logs for kube-controller-manager [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126] ...
	I1217 11:51:50.127935 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:50.165527 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:50.165579 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:50.203822 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:50.203854 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 11:51:49.116487 1917124 node_ready.go:57] node "old-k8s-version-401285" has "Ready":"False" status (will retry)
	W1217 11:51:51.616928 1917124 node_ready.go:57] node "old-k8s-version-401285" has "Ready":"False" status (will retry)
	I1217 11:51:48.825695 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:48.826163 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:48.826227 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:48.826291 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:48.857720 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:48.857750 1888817 cri.go:89] found id: ""
	I1217 11:51:48.857762 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:48.857828 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:48.862066 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:48.862138 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:48.889972 1888817 cri.go:89] found id: ""
	I1217 11:51:48.889998 1888817 logs.go:282] 0 containers: []
	W1217 11:51:48.890005 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:48.890012 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:48.890081 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:48.919686 1888817 cri.go:89] found id: ""
	I1217 11:51:48.919718 1888817 logs.go:282] 0 containers: []
	W1217 11:51:48.919731 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:48.919739 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:48.919808 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:48.948274 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:48.948294 1888817 cri.go:89] found id: ""
	I1217 11:51:48.948302 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:48.948360 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:48.952598 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:48.952682 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:48.981079 1888817 cri.go:89] found id: ""
	I1217 11:51:48.981108 1888817 logs.go:282] 0 containers: []
	W1217 11:51:48.981119 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:48.981127 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:48.981190 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:49.010773 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:49.010795 1888817 cri.go:89] found id: ""
	I1217 11:51:49.010806 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:49.010872 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:49.015173 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:49.015243 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:49.044645 1888817 cri.go:89] found id: ""
	I1217 11:51:49.044672 1888817 logs.go:282] 0 containers: []
	W1217 11:51:49.044680 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:49.044686 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:49.044742 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:49.073859 1888817 cri.go:89] found id: ""
	I1217 11:51:49.073887 1888817 logs.go:282] 0 containers: []
	W1217 11:51:49.073895 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:49.073906 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:49.073923 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:49.104189 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:49.104219 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:49.133970 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:49.133999 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:49.182864 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:49.182901 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:49.215198 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:49.215226 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:49.294725 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:49.294759 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:49.311487 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:49.311520 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:49.369742 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:49.369771 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:49.369787 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:51.903043 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:51.903527 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:51.903623 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:51.903681 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:51.932002 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:51.932028 1888817 cri.go:89] found id: ""
	I1217 11:51:51.932040 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:51.932102 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:51.936238 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:51.936319 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:51.963984 1888817 cri.go:89] found id: ""
	I1217 11:51:51.964018 1888817 logs.go:282] 0 containers: []
	W1217 11:51:51.964029 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:51.964037 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:51.964093 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:51.992462 1888817 cri.go:89] found id: ""
	I1217 11:51:51.992607 1888817 logs.go:282] 0 containers: []
	W1217 11:51:51.992620 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:51.992628 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:51.992698 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:52.020378 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:52.020406 1888817 cri.go:89] found id: ""
	I1217 11:51:52.020416 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:52.020466 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:52.024436 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:52.024493 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:52.050714 1888817 cri.go:89] found id: ""
	I1217 11:51:52.050743 1888817 logs.go:282] 0 containers: []
	W1217 11:51:52.050754 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:52.050762 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:52.050828 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:52.078820 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:52.078851 1888817 cri.go:89] found id: ""
	I1217 11:51:52.078860 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:52.078914 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:52.083294 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:52.083356 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:52.111952 1888817 cri.go:89] found id: ""
	I1217 11:51:52.111978 1888817 logs.go:282] 0 containers: []
	W1217 11:51:52.111989 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:52.111997 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:52.112059 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:52.139627 1888817 cri.go:89] found id: ""
	I1217 11:51:52.139664 1888817 logs.go:282] 0 containers: []
	W1217 11:51:52.139680 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:52.139691 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:52.139716 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:52.168434 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:52.168459 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:52.219875 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:52.219909 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:52.251090 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:52.251123 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:52.334516 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:52.334562 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:52.352170 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:52.352208 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:52.410366 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:52.410393 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:52.410419 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:52.442374 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:52.442407 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:52.745377 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:52.745912 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:52.745969 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:52.746019 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:52.784836 1894629 cri.go:89] found id: "070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:52.784861 1894629 cri.go:89] found id: ""
	I1217 11:51:52.784870 1894629 logs.go:282] 1 containers: [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da]
	I1217 11:51:52.784921 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:52.789711 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:52.789799 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:52.830740 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:52.830769 1894629 cri.go:89] found id: ""
	I1217 11:51:52.830779 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:52.830834 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:52.835373 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:52.835462 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:52.873767 1894629 cri.go:89] found id: ""
	I1217 11:51:52.873796 1894629 logs.go:282] 0 containers: []
	W1217 11:51:52.873810 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:52.873819 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:52.873874 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:52.911461 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:52.911492 1894629 cri.go:89] found id: ""
	I1217 11:51:52.911503 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:52.911583 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:52.915813 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:52.915893 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:52.953461 1894629 cri.go:89] found id: ""
	I1217 11:51:52.953488 1894629 logs.go:282] 0 containers: []
	W1217 11:51:52.953497 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:52.953504 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:52.953571 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:52.991801 1894629 cri.go:89] found id: "fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:52.991824 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:52.991829 1894629 cri.go:89] found id: ""
	I1217 11:51:52.991840 1894629 logs.go:282] 2 containers: [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:52.991903 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:52.996322 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:53.000578 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:53.000653 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:53.038746 1894629 cri.go:89] found id: ""
	I1217 11:51:53.038772 1894629 logs.go:282] 0 containers: []
	W1217 11:51:53.038784 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:53.038793 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:53.038857 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:53.077510 1894629 cri.go:89] found id: ""
	I1217 11:51:53.077559 1894629 logs.go:282] 0 containers: []
	W1217 11:51:53.077572 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:53.077592 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:53.077610 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:53.097985 1894629 logs.go:123] Gathering logs for kube-apiserver [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da] ...
	I1217 11:51:53.098018 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:53.140165 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:53.140199 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:53.182150 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:53.182180 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:53.223269 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:53.223304 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:53.266285 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:53.266313 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:53.360851 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:53.360892 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:53.425495 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:53.425516 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:53.425549 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:53.494529 1894629 logs.go:123] Gathering logs for kube-controller-manager [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126] ...
	I1217 11:51:53.494578 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:53.537140 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:53.537168 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:53.616690 1917124 node_ready.go:49] node "old-k8s-version-401285" is "Ready"
	I1217 11:51:53.616721 1917124 node_ready.go:38] duration metric: took 13.003948455s for node "old-k8s-version-401285" to be "Ready" ...
	I1217 11:51:53.616737 1917124 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:51:53.616787 1917124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:51:53.629303 1917124 api_server.go:72] duration metric: took 13.365520826s to wait for apiserver process to appear ...
	I1217 11:51:53.629335 1917124 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:51:53.629355 1917124 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:51:53.633746 1917124 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:51:53.634970 1917124 api_server.go:141] control plane version: v1.28.0
	I1217 11:51:53.634997 1917124 api_server.go:131] duration metric: took 5.655476ms to wait for apiserver health ...
	I1217 11:51:53.635009 1917124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:51:53.638665 1917124 system_pods.go:59] 8 kube-system pods found
	I1217 11:51:53.638715 1917124 system_pods.go:61] "coredns-5dd5756b68-nkbwq" [51e50eed-e209-4b55-8081-4f2ef5002d1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:51:53.638727 1917124 system_pods.go:61] "etcd-old-k8s-version-401285" [2637ab4e-d103-4f3d-9c32-c7f9f28be1cc] Running
	I1217 11:51:53.638738 1917124 system_pods.go:61] "kindnet-dmn7l" [c67227b5-e18d-4f67-8fbb-4700dbf1763b] Running
	I1217 11:51:53.638750 1917124 system_pods.go:61] "kube-apiserver-old-k8s-version-401285" [d077ac97-835e-4dd5-80b1-8db0a438d08e] Running
	I1217 11:51:53.638761 1917124 system_pods.go:61] "kube-controller-manager-old-k8s-version-401285" [b2a2411a-3492-4f7a-b340-688eb3e7f5f1] Running
	I1217 11:51:53.638766 1917124 system_pods.go:61] "kube-proxy-5867r" [c0846aab-ff89-4559-9234-78e0ba64b1a0] Running
	I1217 11:51:53.638772 1917124 system_pods.go:61] "kube-scheduler-old-k8s-version-401285" [8b1e1d8c-36fb-46ef-8f29-3b7fea415375] Running
	I1217 11:51:53.638780 1917124 system_pods.go:61] "storage-provisioner" [33659d20-b67e-4d55-97b2-6b5129c163a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:51:53.638793 1917124 system_pods.go:74] duration metric: took 3.773283ms to wait for pod list to return data ...
	I1217 11:51:53.638807 1917124 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:51:53.640888 1917124 default_sa.go:45] found service account: "default"
	I1217 11:51:53.640908 1917124 default_sa.go:55] duration metric: took 2.095349ms for default service account to be created ...
	I1217 11:51:53.640920 1917124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:51:53.643852 1917124 system_pods.go:86] 8 kube-system pods found
	I1217 11:51:53.643878 1917124 system_pods.go:89] "coredns-5dd5756b68-nkbwq" [51e50eed-e209-4b55-8081-4f2ef5002d1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:51:53.643883 1917124 system_pods.go:89] "etcd-old-k8s-version-401285" [2637ab4e-d103-4f3d-9c32-c7f9f28be1cc] Running
	I1217 11:51:53.643888 1917124 system_pods.go:89] "kindnet-dmn7l" [c67227b5-e18d-4f67-8fbb-4700dbf1763b] Running
	I1217 11:51:53.643892 1917124 system_pods.go:89] "kube-apiserver-old-k8s-version-401285" [d077ac97-835e-4dd5-80b1-8db0a438d08e] Running
	I1217 11:51:53.643897 1917124 system_pods.go:89] "kube-controller-manager-old-k8s-version-401285" [b2a2411a-3492-4f7a-b340-688eb3e7f5f1] Running
	I1217 11:51:53.643902 1917124 system_pods.go:89] "kube-proxy-5867r" [c0846aab-ff89-4559-9234-78e0ba64b1a0] Running
	I1217 11:51:53.643906 1917124 system_pods.go:89] "kube-scheduler-old-k8s-version-401285" [8b1e1d8c-36fb-46ef-8f29-3b7fea415375] Running
	I1217 11:51:53.643911 1917124 system_pods.go:89] "storage-provisioner" [33659d20-b67e-4d55-97b2-6b5129c163a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:51:53.643932 1917124 retry.go:31] will retry after 246.04985ms: missing components: kube-dns
	I1217 11:51:53.894139 1917124 system_pods.go:86] 8 kube-system pods found
	I1217 11:51:53.894171 1917124 system_pods.go:89] "coredns-5dd5756b68-nkbwq" [51e50eed-e209-4b55-8081-4f2ef5002d1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:51:53.894177 1917124 system_pods.go:89] "etcd-old-k8s-version-401285" [2637ab4e-d103-4f3d-9c32-c7f9f28be1cc] Running
	I1217 11:51:53.894183 1917124 system_pods.go:89] "kindnet-dmn7l" [c67227b5-e18d-4f67-8fbb-4700dbf1763b] Running
	I1217 11:51:53.894186 1917124 system_pods.go:89] "kube-apiserver-old-k8s-version-401285" [d077ac97-835e-4dd5-80b1-8db0a438d08e] Running
	I1217 11:51:53.894190 1917124 system_pods.go:89] "kube-controller-manager-old-k8s-version-401285" [b2a2411a-3492-4f7a-b340-688eb3e7f5f1] Running
	I1217 11:51:53.894193 1917124 system_pods.go:89] "kube-proxy-5867r" [c0846aab-ff89-4559-9234-78e0ba64b1a0] Running
	I1217 11:51:53.894196 1917124 system_pods.go:89] "kube-scheduler-old-k8s-version-401285" [8b1e1d8c-36fb-46ef-8f29-3b7fea415375] Running
	I1217 11:51:53.894204 1917124 system_pods.go:89] "storage-provisioner" [33659d20-b67e-4d55-97b2-6b5129c163a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:51:53.894220 1917124 retry.go:31] will retry after 325.470379ms: missing components: kube-dns
	I1217 11:51:54.224677 1917124 system_pods.go:86] 8 kube-system pods found
	I1217 11:51:54.224711 1917124 system_pods.go:89] "coredns-5dd5756b68-nkbwq" [51e50eed-e209-4b55-8081-4f2ef5002d1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:51:54.224718 1917124 system_pods.go:89] "etcd-old-k8s-version-401285" [2637ab4e-d103-4f3d-9c32-c7f9f28be1cc] Running
	I1217 11:51:54.224726 1917124 system_pods.go:89] "kindnet-dmn7l" [c67227b5-e18d-4f67-8fbb-4700dbf1763b] Running
	I1217 11:51:54.224730 1917124 system_pods.go:89] "kube-apiserver-old-k8s-version-401285" [d077ac97-835e-4dd5-80b1-8db0a438d08e] Running
	I1217 11:51:54.224734 1917124 system_pods.go:89] "kube-controller-manager-old-k8s-version-401285" [b2a2411a-3492-4f7a-b340-688eb3e7f5f1] Running
	I1217 11:51:54.224738 1917124 system_pods.go:89] "kube-proxy-5867r" [c0846aab-ff89-4559-9234-78e0ba64b1a0] Running
	I1217 11:51:54.224741 1917124 system_pods.go:89] "kube-scheduler-old-k8s-version-401285" [8b1e1d8c-36fb-46ef-8f29-3b7fea415375] Running
	I1217 11:51:54.224746 1917124 system_pods.go:89] "storage-provisioner" [33659d20-b67e-4d55-97b2-6b5129c163a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:51:54.224761 1917124 retry.go:31] will retry after 302.464307ms: missing components: kube-dns
	I1217 11:51:54.531882 1917124 system_pods.go:86] 8 kube-system pods found
	I1217 11:51:54.531920 1917124 system_pods.go:89] "coredns-5dd5756b68-nkbwq" [51e50eed-e209-4b55-8081-4f2ef5002d1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:51:54.531926 1917124 system_pods.go:89] "etcd-old-k8s-version-401285" [2637ab4e-d103-4f3d-9c32-c7f9f28be1cc] Running
	I1217 11:51:54.531932 1917124 system_pods.go:89] "kindnet-dmn7l" [c67227b5-e18d-4f67-8fbb-4700dbf1763b] Running
	I1217 11:51:54.531937 1917124 system_pods.go:89] "kube-apiserver-old-k8s-version-401285" [d077ac97-835e-4dd5-80b1-8db0a438d08e] Running
	I1217 11:51:54.531941 1917124 system_pods.go:89] "kube-controller-manager-old-k8s-version-401285" [b2a2411a-3492-4f7a-b340-688eb3e7f5f1] Running
	I1217 11:51:54.531944 1917124 system_pods.go:89] "kube-proxy-5867r" [c0846aab-ff89-4559-9234-78e0ba64b1a0] Running
	I1217 11:51:54.531947 1917124 system_pods.go:89] "kube-scheduler-old-k8s-version-401285" [8b1e1d8c-36fb-46ef-8f29-3b7fea415375] Running
	I1217 11:51:54.531952 1917124 system_pods.go:89] "storage-provisioner" [33659d20-b67e-4d55-97b2-6b5129c163a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:51:54.531968 1917124 retry.go:31] will retry after 495.335626ms: missing components: kube-dns
	I1217 11:51:55.032710 1917124 system_pods.go:86] 8 kube-system pods found
	I1217 11:51:55.032750 1917124 system_pods.go:89] "coredns-5dd5756b68-nkbwq" [51e50eed-e209-4b55-8081-4f2ef5002d1e] Running
	I1217 11:51:55.032759 1917124 system_pods.go:89] "etcd-old-k8s-version-401285" [2637ab4e-d103-4f3d-9c32-c7f9f28be1cc] Running
	I1217 11:51:55.032765 1917124 system_pods.go:89] "kindnet-dmn7l" [c67227b5-e18d-4f67-8fbb-4700dbf1763b] Running
	I1217 11:51:55.032773 1917124 system_pods.go:89] "kube-apiserver-old-k8s-version-401285" [d077ac97-835e-4dd5-80b1-8db0a438d08e] Running
	I1217 11:51:55.032779 1917124 system_pods.go:89] "kube-controller-manager-old-k8s-version-401285" [b2a2411a-3492-4f7a-b340-688eb3e7f5f1] Running
	I1217 11:51:55.032784 1917124 system_pods.go:89] "kube-proxy-5867r" [c0846aab-ff89-4559-9234-78e0ba64b1a0] Running
	I1217 11:51:55.032789 1917124 system_pods.go:89] "kube-scheduler-old-k8s-version-401285" [8b1e1d8c-36fb-46ef-8f29-3b7fea415375] Running
	I1217 11:51:55.032802 1917124 system_pods.go:89] "storage-provisioner" [33659d20-b67e-4d55-97b2-6b5129c163a7] Running
	I1217 11:51:55.032815 1917124 system_pods.go:126] duration metric: took 1.391886953s to wait for k8s-apps to be running ...
	I1217 11:51:55.032827 1917124 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:51:55.032884 1917124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:51:55.046429 1917124 system_svc.go:56] duration metric: took 13.576379ms WaitForService to wait for kubelet
	I1217 11:51:55.046466 1917124 kubeadm.go:587] duration metric: took 14.782689358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:51:55.046488 1917124 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:51:55.049107 1917124 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:51:55.049133 1917124 node_conditions.go:123] node cpu capacity is 8
	I1217 11:51:55.049152 1917124 node_conditions.go:105] duration metric: took 2.658615ms to run NodePressure ...
	I1217 11:51:55.049168 1917124 start.go:242] waiting for startup goroutines ...
	I1217 11:51:55.049181 1917124 start.go:247] waiting for cluster config update ...
	I1217 11:51:55.049198 1917124 start.go:256] writing updated cluster config ...
	I1217 11:51:55.049509 1917124 ssh_runner.go:195] Run: rm -f paused
	I1217 11:51:55.053862 1917124 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:51:55.058567 1917124 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nkbwq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.063186 1917124 pod_ready.go:94] pod "coredns-5dd5756b68-nkbwq" is "Ready"
	I1217 11:51:55.063210 1917124 pod_ready.go:86] duration metric: took 4.61897ms for pod "coredns-5dd5756b68-nkbwq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.066172 1917124 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.070331 1917124 pod_ready.go:94] pod "etcd-old-k8s-version-401285" is "Ready"
	I1217 11:51:55.070352 1917124 pod_ready.go:86] duration metric: took 4.163006ms for pod "etcd-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.072866 1917124 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.078559 1917124 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-401285" is "Ready"
	I1217 11:51:55.078583 1917124 pod_ready.go:86] duration metric: took 5.69779ms for pod "kube-apiserver-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.081244 1917124 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.459080 1917124 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-401285" is "Ready"
	I1217 11:51:55.459104 1917124 pod_ready.go:86] duration metric: took 377.837638ms for pod "kube-controller-manager-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:55.658702 1917124 pod_ready.go:83] waiting for pod "kube-proxy-5867r" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:56.057684 1917124 pod_ready.go:94] pod "kube-proxy-5867r" is "Ready"
	I1217 11:51:56.057710 1917124 pod_ready.go:86] duration metric: took 398.983198ms for pod "kube-proxy-5867r" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:56.258244 1917124 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:56.658316 1917124 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-401285" is "Ready"
	I1217 11:51:56.658350 1917124 pod_ready.go:86] duration metric: took 400.076045ms for pod "kube-scheduler-old-k8s-version-401285" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:51:56.658367 1917124 pod_ready.go:40] duration metric: took 1.604468488s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:51:56.708046 1917124 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 11:51:56.710386 1917124 out.go:203] 
	W1217 11:51:56.711831 1917124 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 11:51:56.713087 1917124 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 11:51:56.714483 1917124 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-401285" cluster and "default" namespace by default
	I1217 11:51:54.972345 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:54.972854 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:54.972919 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:54.972989 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:55.001621 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:55.001646 1888817 cri.go:89] found id: ""
	I1217 11:51:55.001656 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:55.001723 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:55.005926 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:55.006009 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:55.033487 1888817 cri.go:89] found id: ""
	I1217 11:51:55.033515 1888817 logs.go:282] 0 containers: []
	W1217 11:51:55.033524 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:55.033551 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:55.033610 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:55.063607 1888817 cri.go:89] found id: ""
	I1217 11:51:55.063632 1888817 logs.go:282] 0 containers: []
	W1217 11:51:55.063642 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:55.063650 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:55.063706 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:55.093375 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:55.093394 1888817 cri.go:89] found id: ""
	I1217 11:51:55.093413 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:55.093462 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:55.097451 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:55.097514 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:55.125429 1888817 cri.go:89] found id: ""
	I1217 11:51:55.125456 1888817 logs.go:282] 0 containers: []
	W1217 11:51:55.125468 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:55.125477 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:55.125526 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:55.153424 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:55.153452 1888817 cri.go:89] found id: ""
	I1217 11:51:55.153461 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:55.153520 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:55.157459 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:55.157524 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:55.184990 1888817 cri.go:89] found id: ""
	I1217 11:51:55.185022 1888817 logs.go:282] 0 containers: []
	W1217 11:51:55.185031 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:55.185037 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:55.185088 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:55.215761 1888817 cri.go:89] found id: ""
	I1217 11:51:55.215791 1888817 logs.go:282] 0 containers: []
	W1217 11:51:55.215799 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:55.215811 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:55.215827 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:55.273200 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:55.273218 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:55.273231 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:55.305527 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:55.305587 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:55.333054 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:55.333081 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:55.360295 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:55.360323 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:55.406954 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:55.406990 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:55.438145 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:55.438174 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:55.524066 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:55.524103 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:58.042630 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:51:58.043061 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:51:58.043123 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:58.043197 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:58.071855 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:58.071882 1888817 cri.go:89] found id: ""
	I1217 11:51:58.071894 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:51:58.071959 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:58.076247 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:58.076326 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:58.104042 1888817 cri.go:89] found id: ""
	I1217 11:51:58.104067 1888817 logs.go:282] 0 containers: []
	W1217 11:51:58.104077 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:51:58.104083 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:58.104135 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:58.132022 1888817 cri.go:89] found id: ""
	I1217 11:51:58.132053 1888817 logs.go:282] 0 containers: []
	W1217 11:51:58.132064 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:51:58.132072 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:58.132123 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:58.159415 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:58.159440 1888817 cri.go:89] found id: ""
	I1217 11:51:58.159451 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:51:58.159513 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:58.163567 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:58.163624 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:58.190462 1888817 cri.go:89] found id: ""
	I1217 11:51:58.190489 1888817 logs.go:282] 0 containers: []
	W1217 11:51:58.190497 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:58.190503 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:58.190584 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:58.219227 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:58.219247 1888817 cri.go:89] found id: ""
	I1217 11:51:58.219255 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:51:58.219304 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:51:58.223464 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:58.223554 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:58.251524 1888817 cri.go:89] found id: ""
	I1217 11:51:58.251590 1888817 logs.go:282] 0 containers: []
	W1217 11:51:58.251601 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:58.251611 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:58.251673 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:58.283199 1888817 cri.go:89] found id: ""
	I1217 11:51:58.283227 1888817 logs.go:282] 0 containers: []
	W1217 11:51:58.283235 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:58.283245 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:51:58.283264 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:56.081786 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:56.082203 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:56.082266 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:56.082329 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:56.119627 1894629 cri.go:89] found id: "070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:56.119652 1894629 cri.go:89] found id: ""
	I1217 11:51:56.119662 1894629 logs.go:282] 1 containers: [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da]
	I1217 11:51:56.119732 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:56.123720 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:56.123791 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:56.159615 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:56.159644 1894629 cri.go:89] found id: ""
	I1217 11:51:56.159653 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:56.159710 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:56.163862 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:56.164009 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:56.201396 1894629 cri.go:89] found id: ""
	I1217 11:51:56.201425 1894629 logs.go:282] 0 containers: []
	W1217 11:51:56.201436 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:56.201442 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:56.201499 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:56.237327 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:56.237352 1894629 cri.go:89] found id: ""
	I1217 11:51:56.237364 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:56.237419 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:56.241250 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:56.241311 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:56.278646 1894629 cri.go:89] found id: ""
	I1217 11:51:56.278677 1894629 logs.go:282] 0 containers: []
	W1217 11:51:56.278690 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:56.278698 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:56.278760 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:56.315455 1894629 cri.go:89] found id: "fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:56.315476 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:56.315480 1894629 cri.go:89] found id: ""
	I1217 11:51:56.315488 1894629 logs.go:282] 2 containers: [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:56.315547 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:56.319714 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:56.323282 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:56.323351 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:56.359635 1894629 cri.go:89] found id: ""
	I1217 11:51:56.359667 1894629 logs.go:282] 0 containers: []
	W1217 11:51:56.359681 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:56.359690 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:56.359750 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:56.396853 1894629 cri.go:89] found id: ""
	I1217 11:51:56.396877 1894629 logs.go:282] 0 containers: []
	W1217 11:51:56.396885 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:56.396899 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:56.396912 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:56.437467 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:56.437496 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:56.503963 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:51:56.504000 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:56.541476 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:51:56.541505 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:56.582117 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:56.582146 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:56.600015 1894629 logs.go:123] Gathering logs for kube-controller-manager [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126] ...
	I1217 11:51:56.600044 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:56.635492 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:56.635519 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:51:56.676144 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:56.676241 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:56.786869 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:56.786914 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:56.854879 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:56.854902 1894629 logs.go:123] Gathering logs for kube-apiserver [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da] ...
	I1217 11:51:56.854917 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:59.402151 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:51:59.402580 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:51:59.402651 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:51:59.402716 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:51:59.438967 1894629 cri.go:89] found id: "070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:59.438991 1894629 cri.go:89] found id: ""
	I1217 11:51:59.439001 1894629 logs.go:282] 1 containers: [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da]
	I1217 11:51:59.439064 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:59.443048 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:51:59.443118 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:51:59.480348 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:59.480370 1894629 cri.go:89] found id: ""
	I1217 11:51:59.480378 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:51:59.480448 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:59.484299 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:51:59.484359 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:51:59.521097 1894629 cri.go:89] found id: ""
	I1217 11:51:59.521122 1894629 logs.go:282] 0 containers: []
	W1217 11:51:59.521131 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:51:59.521137 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:51:59.521196 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:51:59.558617 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:59.558644 1894629 cri.go:89] found id: ""
	I1217 11:51:59.558653 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:51:59.558713 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:59.562968 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:51:59.563027 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:51:59.598828 1894629 cri.go:89] found id: ""
	I1217 11:51:59.598852 1894629 logs.go:282] 0 containers: []
	W1217 11:51:59.598863 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:51:59.598871 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:51:59.598928 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:51:59.635064 1894629 cri.go:89] found id: "fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:51:59.635089 1894629 cri.go:89] found id: "0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:51:59.635093 1894629 cri.go:89] found id: ""
	I1217 11:51:59.635102 1894629 logs.go:282] 2 containers: [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6]
	I1217 11:51:59.635158 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:59.639032 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:51:59.642640 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:51:59.642702 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:51:59.678615 1894629 cri.go:89] found id: ""
	I1217 11:51:59.678643 1894629 logs.go:282] 0 containers: []
	W1217 11:51:59.678654 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:51:59.678661 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:51:59.678719 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:51:59.715360 1894629 cri.go:89] found id: ""
	I1217 11:51:59.715397 1894629 logs.go:282] 0 containers: []
	W1217 11:51:59.715409 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:51:59.715449 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:59.715466 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:59.809833 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:59.809870 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:59.829853 1894629 logs.go:123] Gathering logs for kube-apiserver [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da] ...
	I1217 11:51:59.829889 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:51:59.873291 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:51:59.873325 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:51:59.914919 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:51:59.914952 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:51:59.989322 1894629 logs.go:123] Gathering logs for kube-controller-manager [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126] ...
	I1217 11:51:59.989395 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:52:00.029158 1894629 logs.go:123] Gathering logs for kube-controller-manager [0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6] ...
	I1217 11:52:00.029185 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0661f948bc9bf0a1e6cd9824c2c888930782d9222ecc7acad8f97c9ee52f50e6"
	I1217 11:52:00.067606 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:52:00.067635 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:52:00.107085 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:52:00.107130 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:52:00.170480 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:52:00.170500 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:52:00.170512 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:51:58.315149 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:51:58.315186 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:51:58.396993 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:51:58.397045 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:51:58.413703 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:51:58.413739 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:51:58.470707 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:51:58.470726 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:51:58.470742 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:51:58.502193 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:51:58.502226 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:51:58.530864 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:51:58.530897 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:51:58.557971 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:51:58.558004 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:52:01.108085 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:52:01.108629 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:52:01.108686 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:52:01.108740 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:52:01.138560 1888817 cri.go:89] found id: "8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:52:01.138588 1888817 cri.go:89] found id: ""
	I1217 11:52:01.138599 1888817 logs.go:282] 1 containers: [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985]
	I1217 11:52:01.138663 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:52:01.142917 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:52:01.142986 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:52:01.171247 1888817 cri.go:89] found id: ""
	I1217 11:52:01.171277 1888817 logs.go:282] 0 containers: []
	W1217 11:52:01.171285 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:52:01.171292 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:52:01.171354 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:52:01.198888 1888817 cri.go:89] found id: ""
	I1217 11:52:01.198921 1888817 logs.go:282] 0 containers: []
	W1217 11:52:01.198933 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:52:01.198941 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:52:01.199002 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:52:01.228375 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:52:01.228398 1888817 cri.go:89] found id: ""
	I1217 11:52:01.228406 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:52:01.228456 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:52:01.232677 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:52:01.232745 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:52:01.261248 1888817 cri.go:89] found id: ""
	I1217 11:52:01.261275 1888817 logs.go:282] 0 containers: []
	W1217 11:52:01.261284 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:52:01.261290 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:52:01.261349 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:52:01.290088 1888817 cri.go:89] found id: "4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:52:01.290116 1888817 cri.go:89] found id: ""
	I1217 11:52:01.290127 1888817 logs.go:282] 1 containers: [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96]
	I1217 11:52:01.290193 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:52:01.294337 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:52:01.294402 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:52:01.325217 1888817 cri.go:89] found id: ""
	I1217 11:52:01.325245 1888817 logs.go:282] 0 containers: []
	W1217 11:52:01.325256 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:52:01.325265 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:52:01.325318 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:52:01.354059 1888817 cri.go:89] found id: ""
	I1217 11:52:01.354083 1888817 logs.go:282] 0 containers: []
	W1217 11:52:01.354092 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:52:01.354102 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:52:01.354115 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:52:01.412690 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:52:01.412713 1888817 logs.go:123] Gathering logs for kube-apiserver [8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985] ...
	I1217 11:52:01.412728 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8941febd25a0a7e33e6517aa8c46eba5c8f6fc5f83066e5af5f561b915f9d985"
	I1217 11:52:01.444257 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:52:01.444289 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:52:01.474111 1888817 logs.go:123] Gathering logs for kube-controller-manager [4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96] ...
	I1217 11:52:01.474148 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b9897d0ac58f03dbc400c9137e7a7344ab6bf72b2339608d931ac1db046eb96"
	I1217 11:52:01.503769 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:52:01.503803 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:52:01.552745 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:52:01.552783 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:52:01.586315 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:52:01.586343 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:52:01.673435 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:52:01.673474 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:52:02.713247 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:52:02.713766 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:52:02.713831 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:52:02.713888 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:52:02.751778 1894629 cri.go:89] found id: "070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:52:02.751800 1894629 cri.go:89] found id: ""
	I1217 11:52:02.751809 1894629 logs.go:282] 1 containers: [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da]
	I1217 11:52:02.751857 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:52:02.755720 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:52:02.755779 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:52:02.792085 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:52:02.792113 1894629 cri.go:89] found id: ""
	I1217 11:52:02.792125 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:52:02.792217 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:52:02.796166 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:52:02.796235 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:52:02.833576 1894629 cri.go:89] found id: ""
	I1217 11:52:02.833608 1894629 logs.go:282] 0 containers: []
	W1217 11:52:02.833619 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:52:02.833625 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:52:02.833671 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:52:02.870281 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:52:02.870302 1894629 cri.go:89] found id: ""
	I1217 11:52:02.870311 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:52:02.870372 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:52:02.874318 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:52:02.874383 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:52:02.911157 1894629 cri.go:89] found id: ""
	I1217 11:52:02.911182 1894629 logs.go:282] 0 containers: []
	W1217 11:52:02.911189 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:52:02.911195 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:52:02.911256 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:52:02.949430 1894629 cri.go:89] found id: "fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:52:02.949451 1894629 cri.go:89] found id: ""
	I1217 11:52:02.949460 1894629 logs.go:282] 1 containers: [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126]
	I1217 11:52:02.949519 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:52:02.953713 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:52:02.953782 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:52:02.991476 1894629 cri.go:89] found id: ""
	I1217 11:52:02.991499 1894629 logs.go:282] 0 containers: []
	W1217 11:52:02.991508 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:52:02.991514 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:52:02.991588 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:52:03.028255 1894629 cri.go:89] found id: ""
	I1217 11:52:03.028285 1894629 logs.go:282] 0 containers: []
	W1217 11:52:03.028299 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:52:03.028317 1894629 logs.go:123] Gathering logs for kube-apiserver [070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da] ...
	I1217 11:52:03.028332 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070bb3eaf5fd071c105f928c22b0be011b0807ff446433027914fab1c8abd2da"
	I1217 11:52:03.067111 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:52:03.067149 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:52:03.107031 1894629 logs.go:123] Gathering logs for kube-controller-manager [fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126] ...
	I1217 11:52:03.107063 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb9326a624469ec65781a09309eaa6aa587c08a4ee6759853f3ca58f57ee5126"
	I1217 11:52:03.143320 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:52:03.143345 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:52:03.180500 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:52:03.180546 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:52:03.221997 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:52:03.222086 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:52:03.311436 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:52:03.311472 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:52:03.331729 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:52:03.331759 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:52:03.393666 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:52:03.393685 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:52:03.393697 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	
	
	==> CRI-O <==
	Dec 17 11:51:53 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:53.919946798Z" level=info msg="Starting container: 5e2e4058d3ebb6027f8973a07717743f265d242eda8375674a2fe997e248e588" id=47bf3bec-49ee-406c-ab78-cb520591a45c name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:51:53 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:53.921950806Z" level=info msg="Started container" PID=2225 containerID=5e2e4058d3ebb6027f8973a07717743f265d242eda8375674a2fe997e248e588 description=kube-system/coredns-5dd5756b68-nkbwq/coredns id=47bf3bec-49ee-406c-ab78-cb520591a45c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e5991993b26e18704c5ff27460db4b18407ec6709f03c98153ce6f28a4e5c79
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.175961034Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5647e395-88b9-49e3-9b67-2847416c0c4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.176035058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.180944234Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:537c06e8c607ab0b654a9f746bb72698a82b07d27ad1615952f0d59044f734ce UID:6087a098-1923-4b52-82a5-cfa6127e5a10 NetNS:/var/run/netns/ef03dd55-7d7a-4f2a-9e7b-31897dc8664f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009064b8}] Aliases:map[]}"
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.180975228Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.19026118Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:537c06e8c607ab0b654a9f746bb72698a82b07d27ad1615952f0d59044f734ce UID:6087a098-1923-4b52-82a5-cfa6127e5a10 NetNS:/var/run/netns/ef03dd55-7d7a-4f2a-9e7b-31897dc8664f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009064b8}] Aliases:map[]}"
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.190405283Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.191210698Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.192086889Z" level=info msg="Ran pod sandbox 537c06e8c607ab0b654a9f746bb72698a82b07d27ad1615952f0d59044f734ce with infra container: default/busybox/POD" id=5647e395-88b9-49e3-9b67-2847416c0c4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.193254001Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ce490819-0907-4d9e-a2d9-f844a7225246 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.193371563Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ce490819-0907-4d9e-a2d9-f844a7225246 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.193429343Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ce490819-0907-4d9e-a2d9-f844a7225246 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.193976355Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f797a126-339c-4a7c-82bf-30aad2739f98 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:51:57 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:57.198220777Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.124181756Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f797a126-339c-4a7c-82bf-30aad2739f98 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.125163409Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8cb5e4dd-e93f-4ab6-9ba9-5900539e2164 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.126644509Z" level=info msg="Creating container: default/busybox/busybox" id=44d1659a-4856-4767-bc6f-600b79792233 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.126791453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.13057645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.131113231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.159101907Z" level=info msg="Created container 123e8421f766f055a49c4007b199666b555fc139c1720ae0d2a7db475b507209: default/busybox/busybox" id=44d1659a-4856-4767-bc6f-600b79792233 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.159764777Z" level=info msg="Starting container: 123e8421f766f055a49c4007b199666b555fc139c1720ae0d2a7db475b507209" id=ccb34d26-54af-4917-94d2-9de9c226aeb8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:51:59 old-k8s-version-401285 crio[822]: time="2025-12-17T11:51:59.161486533Z" level=info msg="Started container" PID=2301 containerID=123e8421f766f055a49c4007b199666b555fc139c1720ae0d2a7db475b507209 description=default/busybox/busybox id=ccb34d26-54af-4917-94d2-9de9c226aeb8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=537c06e8c607ab0b654a9f746bb72698a82b07d27ad1615952f0d59044f734ce
	Dec 17 11:52:04 old-k8s-version-401285 crio[822]: time="2025-12-17T11:52:04.95800816Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	123e8421f766f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   537c06e8c607a       busybox                                          default
	5e2e4058d3ebb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   1e5991993b26e       coredns-5dd5756b68-nkbwq                         kube-system
	fef74a1ef9509       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d5dc71be381f7       storage-provisioner                              kube-system
	1f97eccca67db       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   84c98f72a77ae       kindnet-dmn7l                                    kube-system
	121439e5783bc       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   7f003b6a00919       kube-proxy-5867r                                 kube-system
	dbac4197e1ca0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   2cc45d4987fea       etcd-old-k8s-version-401285                      kube-system
	90418199d770c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   f771e3db07dd9       kube-apiserver-old-k8s-version-401285            kube-system
	f7d11dc060462       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   ff41af45dc061       kube-scheduler-old-k8s-version-401285            kube-system
	6069eab8f0078       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   b5d674fdd03cb       kube-controller-manager-old-k8s-version-401285   kube-system
	
	
	==> coredns [5e2e4058d3ebb6027f8973a07717743f265d242eda8375674a2fe997e248e588] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58680 - 32045 "HINFO IN 3730874142446030847.2169022523053158845. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027330472s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-401285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-401285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=old-k8s-version-401285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_51_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:51:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-401285
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:51:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:51:57 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:51:57 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:51:57 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:51:57 +0000   Wed, 17 Dec 2025 11:51:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-401285
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                025c44d6-1ab3-4126-8994-078d0fca59b0
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-nkbwq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-401285                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-dmn7l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-401285             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-401285    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-5867r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-401285             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-401285 event: Registered Node old-k8s-version-401285 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-401285 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [dbac4197e1ca04c60736755bc12db332a2d4987c91a5de9524290749ebaa235b] <==
	{"level":"info","ts":"2025-12-17T11:51:22.255292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-17T11:51:22.256361Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-17T11:51:22.25743Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T11:51:22.257755Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:51:22.257794Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:51:22.257918Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T11:51:22.257952Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T11:51:22.645449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T11:51:22.6455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T11:51:22.645563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-17T11:51:22.645583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:51:22.645591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T11:51:22.645608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T11:51:22.645618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T11:51:22.646728Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:51:22.646937Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-401285 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:51:22.646964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:51:22.646944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:51:22.647154Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:51:22.647194Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:51:22.647603Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:51:22.647786Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:51:22.647826Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:51:22.649637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T11:51:22.649738Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 11:52:06 up  5:34,  0 user,  load average: 2.56, 2.70, 1.87
	Linux old-k8s-version-401285 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f97eccca67db4918b68120f0084a92fd54fa86b0f22b2e85e9535f54a91760e] <==
	I1217 11:51:42.871623       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:51:42.871874       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:51:42.872050       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:51:42.872077       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:51:42.872102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:51:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:51:43.077343       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:51:43.077367       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:51:43.077382       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:51:43.166290       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:51:43.504802       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:51:43.504855       1 metrics.go:72] Registering metrics
	I1217 11:51:43.565926       1 controller.go:711] "Syncing nftables rules"
	I1217 11:51:53.084684       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:51:53.084729       1 main.go:301] handling current node
	I1217 11:52:03.079656       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:52:03.079688       1 main.go:301] handling current node
	
	
	==> kube-apiserver [90418199d770cee370c6a14821a26d77d85e602f55ac7c4d59a4b37a3e823712] <==
	I1217 11:51:23.796415       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:51:23.796426       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:51:23.796527       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 11:51:23.796555       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 11:51:23.800836       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 11:51:23.802066       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 11:51:23.802120       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 11:51:23.803246       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 11:51:23.820182       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 11:51:23.828412       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:51:24.700846       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 11:51:24.704924       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:51:24.704946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:51:25.118336       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:51:25.155881       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:51:25.210722       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:51:25.215909       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 11:51:25.216864       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 11:51:25.221624       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:51:25.742406       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 11:51:26.666932       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 11:51:26.677185       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:51:26.688676       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1217 11:51:39.957754       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1217 11:51:40.104620       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6069eab8f0078720a5056cb919f524882f1e340bf54ca59461821a1e0c423f52] <==
	I1217 11:51:40.104625       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 11:51:40.110175       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 11:51:40.110378       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1217 11:51:40.127555       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v4828"
	I1217 11:51:40.138573       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nkbwq"
	I1217 11:51:40.142482       1 shared_informer.go:318] Caches are synced for attach detach
	I1217 11:51:40.144581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.348633ms"
	I1217 11:51:40.152012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.376301ms"
	I1217 11:51:40.152153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.845µs"
	I1217 11:51:40.165244       1 shared_informer.go:318] Caches are synced for persistent volume
	I1217 11:51:40.192262       1 shared_informer.go:318] Caches are synced for PV protection
	I1217 11:51:40.530670       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 11:51:40.542896       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 11:51:40.543017       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 11:51:40.639064       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1217 11:51:40.652474       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-v4828"
	I1217 11:51:40.661642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.572902ms"
	I1217 11:51:40.668555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.831828ms"
	I1217 11:51:40.668664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.473µs"
	I1217 11:51:53.553212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.858µs"
	I1217 11:51:53.575480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.235µs"
	I1217 11:51:54.826649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.183µs"
	I1217 11:51:54.844396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.795517ms"
	I1217 11:51:54.844502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.862µs"
	I1217 11:51:54.961522       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [121439e5783bc6da5acb9347703d2a185e3a1985e1ff4823b544eed446ca78b8] <==
	I1217 11:51:40.505606       1 server_others.go:69] "Using iptables proxy"
	I1217 11:51:40.517323       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1217 11:51:40.544239       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:51:40.547435       1 server_others.go:152] "Using iptables Proxier"
	I1217 11:51:40.547496       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 11:51:40.547507       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 11:51:40.547549       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 11:51:40.547866       1 server.go:846] "Version info" version="v1.28.0"
	I1217 11:51:40.547888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:51:40.548701       1 config.go:97] "Starting endpoint slice config controller"
	I1217 11:51:40.548766       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 11:51:40.548714       1 config.go:188] "Starting service config controller"
	I1217 11:51:40.548806       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 11:51:40.551341       1 config.go:315] "Starting node config controller"
	I1217 11:51:40.551382       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 11:51:40.649893       1 shared_informer.go:318] Caches are synced for service config
	I1217 11:51:40.649951       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 11:51:40.651606       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f7d11dc0604629b407c18d6ea055942c62f4927ac5d78a318ce0aa8a538b9865] <==
	W1217 11:51:23.758876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1217 11:51:23.758994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 11:51:23.759015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 11:51:23.759073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1217 11:51:24.628847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1217 11:51:24.628884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1217 11:51:24.636588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1217 11:51:24.636620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1217 11:51:24.683418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1217 11:51:24.683448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1217 11:51:24.693692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1217 11:51:24.693721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1217 11:51:24.738310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1217 11:51:24.738346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1217 11:51:24.810363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 11:51:24.810401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1217 11:51:24.891361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1217 11:51:24.891398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1217 11:51:24.934708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1217 11:51:24.934746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1217 11:51:24.962452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1217 11:51:24.962496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1217 11:51:25.120124       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1217 11:51:25.120281       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1217 11:51:27.953917       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 11:51:39 old-k8s-version-401285 kubelet[1449]: I1217 11:51:39.981959    1449 topology_manager.go:215] "Topology Admit Handler" podUID="c67227b5-e18d-4f67-8fbb-4700dbf1763b" podNamespace="kube-system" podName="kindnet-dmn7l"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014295    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0846aab-ff89-4559-9234-78e0ba64b1a0-kube-proxy\") pod \"kube-proxy-5867r\" (UID: \"c0846aab-ff89-4559-9234-78e0ba64b1a0\") " pod="kube-system/kube-proxy-5867r"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014350    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvqm\" (UniqueName: \"kubernetes.io/projected/c0846aab-ff89-4559-9234-78e0ba64b1a0-kube-api-access-nwvqm\") pod \"kube-proxy-5867r\" (UID: \"c0846aab-ff89-4559-9234-78e0ba64b1a0\") " pod="kube-system/kube-proxy-5867r"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014384    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c67227b5-e18d-4f67-8fbb-4700dbf1763b-lib-modules\") pod \"kindnet-dmn7l\" (UID: \"c67227b5-e18d-4f67-8fbb-4700dbf1763b\") " pod="kube-system/kindnet-dmn7l"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014429    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5z8\" (UniqueName: \"kubernetes.io/projected/c67227b5-e18d-4f67-8fbb-4700dbf1763b-kube-api-access-pp5z8\") pod \"kindnet-dmn7l\" (UID: \"c67227b5-e18d-4f67-8fbb-4700dbf1763b\") " pod="kube-system/kindnet-dmn7l"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014461    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0846aab-ff89-4559-9234-78e0ba64b1a0-xtables-lock\") pod \"kube-proxy-5867r\" (UID: \"c0846aab-ff89-4559-9234-78e0ba64b1a0\") " pod="kube-system/kube-proxy-5867r"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014500    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c67227b5-e18d-4f67-8fbb-4700dbf1763b-cni-cfg\") pod \"kindnet-dmn7l\" (UID: \"c67227b5-e18d-4f67-8fbb-4700dbf1763b\") " pod="kube-system/kindnet-dmn7l"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014572    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c67227b5-e18d-4f67-8fbb-4700dbf1763b-xtables-lock\") pod \"kindnet-dmn7l\" (UID: \"c67227b5-e18d-4f67-8fbb-4700dbf1763b\") " pod="kube-system/kindnet-dmn7l"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.014663    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0846aab-ff89-4559-9234-78e0ba64b1a0-lib-modules\") pod \"kube-proxy-5867r\" (UID: \"c0846aab-ff89-4559-9234-78e0ba64b1a0\") " pod="kube-system/kube-proxy-5867r"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.033147    1449 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.033966    1449 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 11:51:40 old-k8s-version-401285 kubelet[1449]: I1217 11:51:40.794204    1449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5867r" podStartSLOduration=1.794152665 podCreationTimestamp="2025-12-17 11:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:51:40.793860239 +0000 UTC m=+14.151342981" watchObservedRunningTime="2025-12-17 11:51:40.794152665 +0000 UTC m=+14.151635407"
	Dec 17 11:51:42 old-k8s-version-401285 kubelet[1449]: I1217 11:51:42.802935    1449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-dmn7l" podStartSLOduration=1.446514405 podCreationTimestamp="2025-12-17 11:51:39 +0000 UTC" firstStartedPulling="2025-12-17 11:51:40.296277617 +0000 UTC m=+13.653760349" lastFinishedPulling="2025-12-17 11:51:42.652641353 +0000 UTC m=+16.010124092" observedRunningTime="2025-12-17 11:51:42.802687981 +0000 UTC m=+16.160170723" watchObservedRunningTime="2025-12-17 11:51:42.802878148 +0000 UTC m=+16.160360892"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.525395    1449 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.551591    1449 topology_manager.go:215] "Topology Admit Handler" podUID="33659d20-b67e-4d55-97b2-6b5129c163a7" podNamespace="kube-system" podName="storage-provisioner"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.553330    1449 topology_manager.go:215] "Topology Admit Handler" podUID="51e50eed-e209-4b55-8081-4f2ef5002d1e" podNamespace="kube-system" podName="coredns-5dd5756b68-nkbwq"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.610642    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/33659d20-b67e-4d55-97b2-6b5129c163a7-tmp\") pod \"storage-provisioner\" (UID: \"33659d20-b67e-4d55-97b2-6b5129c163a7\") " pod="kube-system/storage-provisioner"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.610713    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2jv\" (UniqueName: \"kubernetes.io/projected/51e50eed-e209-4b55-8081-4f2ef5002d1e-kube-api-access-9w2jv\") pod \"coredns-5dd5756b68-nkbwq\" (UID: \"51e50eed-e209-4b55-8081-4f2ef5002d1e\") " pod="kube-system/coredns-5dd5756b68-nkbwq"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.610803    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7cbf\" (UniqueName: \"kubernetes.io/projected/33659d20-b67e-4d55-97b2-6b5129c163a7-kube-api-access-d7cbf\") pod \"storage-provisioner\" (UID: \"33659d20-b67e-4d55-97b2-6b5129c163a7\") " pod="kube-system/storage-provisioner"
	Dec 17 11:51:53 old-k8s-version-401285 kubelet[1449]: I1217 11:51:53.610834    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e50eed-e209-4b55-8081-4f2ef5002d1e-config-volume\") pod \"coredns-5dd5756b68-nkbwq\" (UID: \"51e50eed-e209-4b55-8081-4f2ef5002d1e\") " pod="kube-system/coredns-5dd5756b68-nkbwq"
	Dec 17 11:51:54 old-k8s-version-401285 kubelet[1449]: I1217 11:51:54.838043    1449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-nkbwq" podStartSLOduration=14.837970091 podCreationTimestamp="2025-12-17 11:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:51:54.826612349 +0000 UTC m=+28.184095091" watchObservedRunningTime="2025-12-17 11:51:54.837970091 +0000 UTC m=+28.195452833"
	Dec 17 11:51:54 old-k8s-version-401285 kubelet[1449]: I1217 11:51:54.845872    1449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.845816683 podCreationTimestamp="2025-12-17 11:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:51:54.845522464 +0000 UTC m=+28.203005206" watchObservedRunningTime="2025-12-17 11:51:54.845816683 +0000 UTC m=+28.203299425"
	Dec 17 11:51:56 old-k8s-version-401285 kubelet[1449]: I1217 11:51:56.874386    1449 topology_manager.go:215] "Topology Admit Handler" podUID="6087a098-1923-4b52-82a5-cfa6127e5a10" podNamespace="default" podName="busybox"
	Dec 17 11:51:56 old-k8s-version-401285 kubelet[1449]: I1217 11:51:56.930270    1449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjwk\" (UniqueName: \"kubernetes.io/projected/6087a098-1923-4b52-82a5-cfa6127e5a10-kube-api-access-dbjwk\") pod \"busybox\" (UID: \"6087a098-1923-4b52-82a5-cfa6127e5a10\") " pod="default/busybox"
	Dec 17 11:51:59 old-k8s-version-401285 kubelet[1449]: I1217 11:51:59.840948    1449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.909973499 podCreationTimestamp="2025-12-17 11:51:56 +0000 UTC" firstStartedPulling="2025-12-17 11:51:57.193651293 +0000 UTC m=+30.551134028" lastFinishedPulling="2025-12-17 11:51:59.124581908 +0000 UTC m=+32.482064637" observedRunningTime="2025-12-17 11:51:59.840486639 +0000 UTC m=+33.197969381" watchObservedRunningTime="2025-12-17 11:51:59.840904108 +0000 UTC m=+33.198386849"
	
	
	==> storage-provisioner [fef74a1ef9509c852d25a08ae3cf661b32526e396b328b7d2292b66d9c50f314] <==
	I1217 11:51:53.925078       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:51:53.934014       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:51:53.934112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 11:51:53.942458       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:51:53.942617       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca339107-09fc-488a-9de1-8033a0f945ef", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-401285_1aa05ec6-cb6e-4033-a748-18f30b4f8552 became leader
	I1217 11:51:53.942651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-401285_1aa05ec6-cb6e-4033-a748-18f30b4f8552!
	I1217 11:51:54.043110       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-401285_1aa05ec6-cb6e-4033-a748-18f30b4f8552!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-401285 -n old-k8s-version-401285
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-401285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-401285 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-401285 --alsologtostderr -v=1: exit status 80 (2.033171783s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-401285 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:53:25.676522 1938991 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:53:25.676829 1938991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:53:25.676841 1938991 out.go:374] Setting ErrFile to fd 2...
	I1217 11:53:25.676844 1938991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:53:25.677101 1938991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:53:25.677408 1938991 out.go:368] Setting JSON to false
	I1217 11:53:25.677442 1938991 mustload.go:66] Loading cluster: old-k8s-version-401285
	I1217 11:53:25.677893 1938991 config.go:182] Loaded profile config "old-k8s-version-401285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 11:53:25.678471 1938991 cli_runner.go:164] Run: docker container inspect old-k8s-version-401285 --format={{.State.Status}}
	I1217 11:53:25.699747 1938991 host.go:66] Checking if "old-k8s-version-401285" exists ...
	I1217 11:53:25.700116 1938991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:53:25.768620 1938991 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-17 11:53:25.755659889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:53:25.769456 1938991 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-401285 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 11:53:25.772818 1938991 out.go:179] * Pausing node old-k8s-version-401285 ... 
	I1217 11:53:25.774005 1938991 host.go:66] Checking if "old-k8s-version-401285" exists ...
	I1217 11:53:25.774276 1938991 ssh_runner.go:195] Run: systemctl --version
	I1217 11:53:25.774320 1938991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-401285
	I1217 11:53:25.799197 1938991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/old-k8s-version-401285/id_rsa Username:docker}
	I1217 11:53:25.898689 1938991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:53:25.926173 1938991 pause.go:52] kubelet running: true
	I1217 11:53:25.926237 1938991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:53:26.188773 1938991 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:53:26.188864 1938991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:53:26.277396 1938991 cri.go:89] found id: "8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60"
	I1217 11:53:26.277434 1938991 cri.go:89] found id: "58dc04f8562f4834da267a4a4e1e01fea4f3965d999f69cb7a337c022308ca4a"
	I1217 11:53:26.277439 1938991 cri.go:89] found id: "70af8698b90248c8d07b9effdf600c38738c38914120182e85c36977d7916bf2"
	I1217 11:53:26.277442 1938991 cri.go:89] found id: "cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb"
	I1217 11:53:26.277446 1938991 cri.go:89] found id: "3a799da8c577475c5da3a3846bd74a1474d4f4d9552c749aa00155b4a2b65fd9"
	I1217 11:53:26.277471 1938991 cri.go:89] found id: "2f758407de6a5197364df61528ddb100122e25164fa97424a91e1cfbf63d5b32"
	I1217 11:53:26.277474 1938991 cri.go:89] found id: "9aa49d40045e1e67467ef562959460e7790cb28aff33d0ead73eb299efd0348c"
	I1217 11:53:26.277477 1938991 cri.go:89] found id: "149391f8debc5ff3a0624fc6350eb74473e08e63dc1f13dba71547b6cbc7f5ca"
	I1217 11:53:26.277480 1938991 cri.go:89] found id: "8f15dd64ca827bde9a31635cadaed80039200397a5a70c03ee468cf1952c4c87"
	I1217 11:53:26.277500 1938991 cri.go:89] found id: "792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	I1217 11:53:26.277506 1938991 cri.go:89] found id: "7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e"
	I1217 11:53:26.277508 1938991 cri.go:89] found id: ""
	I1217 11:53:26.277598 1938991 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:53:26.293430 1938991 retry.go:31] will retry after 274.104602ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:53:26Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:53:26.567728 1938991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:53:26.582613 1938991 pause.go:52] kubelet running: false
	I1217 11:53:26.582675 1938991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:53:26.723346 1938991 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:53:26.723428 1938991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:53:26.796080 1938991 cri.go:89] found id: "8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60"
	I1217 11:53:26.796106 1938991 cri.go:89] found id: "58dc04f8562f4834da267a4a4e1e01fea4f3965d999f69cb7a337c022308ca4a"
	I1217 11:53:26.796113 1938991 cri.go:89] found id: "70af8698b90248c8d07b9effdf600c38738c38914120182e85c36977d7916bf2"
	I1217 11:53:26.796118 1938991 cri.go:89] found id: "cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb"
	I1217 11:53:26.796122 1938991 cri.go:89] found id: "3a799da8c577475c5da3a3846bd74a1474d4f4d9552c749aa00155b4a2b65fd9"
	I1217 11:53:26.796127 1938991 cri.go:89] found id: "2f758407de6a5197364df61528ddb100122e25164fa97424a91e1cfbf63d5b32"
	I1217 11:53:26.796131 1938991 cri.go:89] found id: "9aa49d40045e1e67467ef562959460e7790cb28aff33d0ead73eb299efd0348c"
	I1217 11:53:26.796136 1938991 cri.go:89] found id: "149391f8debc5ff3a0624fc6350eb74473e08e63dc1f13dba71547b6cbc7f5ca"
	I1217 11:53:26.796139 1938991 cri.go:89] found id: "8f15dd64ca827bde9a31635cadaed80039200397a5a70c03ee468cf1952c4c87"
	I1217 11:53:26.796163 1938991 cri.go:89] found id: "792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	I1217 11:53:26.796168 1938991 cri.go:89] found id: "7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e"
	I1217 11:53:26.796172 1938991 cri.go:89] found id: ""
	I1217 11:53:26.796243 1938991 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:53:26.808524 1938991 retry.go:31] will retry after 513.712395ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:53:26Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:53:27.323345 1938991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:53:27.338123 1938991 pause.go:52] kubelet running: false
	I1217 11:53:27.338204 1938991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:53:27.518090 1938991 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:53:27.518169 1938991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:53:27.604665 1938991 cri.go:89] found id: "8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60"
	I1217 11:53:27.604692 1938991 cri.go:89] found id: "58dc04f8562f4834da267a4a4e1e01fea4f3965d999f69cb7a337c022308ca4a"
	I1217 11:53:27.604699 1938991 cri.go:89] found id: "70af8698b90248c8d07b9effdf600c38738c38914120182e85c36977d7916bf2"
	I1217 11:53:27.604704 1938991 cri.go:89] found id: "cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb"
	I1217 11:53:27.604708 1938991 cri.go:89] found id: "3a799da8c577475c5da3a3846bd74a1474d4f4d9552c749aa00155b4a2b65fd9"
	I1217 11:53:27.604713 1938991 cri.go:89] found id: "2f758407de6a5197364df61528ddb100122e25164fa97424a91e1cfbf63d5b32"
	I1217 11:53:27.604718 1938991 cri.go:89] found id: "9aa49d40045e1e67467ef562959460e7790cb28aff33d0ead73eb299efd0348c"
	I1217 11:53:27.604722 1938991 cri.go:89] found id: "149391f8debc5ff3a0624fc6350eb74473e08e63dc1f13dba71547b6cbc7f5ca"
	I1217 11:53:27.604727 1938991 cri.go:89] found id: "8f15dd64ca827bde9a31635cadaed80039200397a5a70c03ee468cf1952c4c87"
	I1217 11:53:27.604735 1938991 cri.go:89] found id: "792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	I1217 11:53:27.604740 1938991 cri.go:89] found id: "7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e"
	I1217 11:53:27.604744 1938991 cri.go:89] found id: ""
	I1217 11:53:27.604796 1938991 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:53:27.621860 1938991 out.go:203] 
	W1217 11:53:27.623346 1938991 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:53:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:53:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:53:27.623378 1938991 out.go:285] * 
	* 
	W1217 11:53:27.634835 1938991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:53:27.636515 1938991 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-401285 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-401285
helpers_test.go:244: (dbg) docker inspect old-k8s-version-401285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0",
	        "Created": "2025-12-17T11:51:14.16613837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1929282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:52:24.041685697Z",
	            "FinishedAt": "2025-12-17T11:52:23.118301553Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/hosts",
	        "LogPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0-json.log",
	        "Name": "/old-k8s-version-401285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-401285:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-401285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0",
	                "LowerDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-401285",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-401285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-401285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-401285",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-401285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c64693db121db6266350b6c40d5c57ea3cee68050fb3bfd208af900fde02e4b0",
	            "SandboxKey": "/var/run/docker/netns/c64693db121d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-401285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28c236790a3f61d93a940c5e5d3e7f4dd4932eb2cb6dabba52c6ea762e486410",
	                    "EndpointID": "fde3e80a0389686ab1d9afb8f29e3d1f88c77a7e4381cfc038b09ba62991aec9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:35:2a:52:6d:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-401285",
	                        "2cc7fce2754b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
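For reference, the "NetworkSettings.Ports" map in the inspect output above is where the forwarded host ports live (22/tcp is bound to 127.0.0.1:34596 here). A minimal Go sketch of reading that port back through the Docker CLI's template support follows; it is not the harness's own helper, and only the container name is taken from this report:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Evaluate a Go template against the container's NetworkSettings.Ports map
    	// to recover the host port Docker bound to the guest's 22/tcp (SSH).
    	out, err := exec.Command("docker", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"old-k8s-version-401285").Output()
    	if err != nil {
    		fmt.Println("docker inspect failed:", err)
    		return
    	}
    	fmt.Println(strings.TrimSpace(string(out))) // prints "34596" for the container above
    }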
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285: exit status 2 (355.047534ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-401285 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-401285 logs -n 25: (1.244635024s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-213935 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo containerd config dump                                                                                                                                                                                                  │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo crio config                                                                                                                                                                                                             │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ delete  │ -p cilium-213935                                                                                                                                                                                                                              │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p force-systemd-flag-881315 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ ssh     │ force-systemd-flag-881315 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p force-systemd-flag-881315                                                                                                                                                                                                                  │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p cert-options-714247 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ cert-options-714247 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ -p cert-options-714247 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ delete  │ -p cert-options-714247                                                                                                                                                                                                                        │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │                     │
	│ stop    │ -p old-k8s-version-401285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-401285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-067996    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p cert-expiration-067996                                                                                                                                                                                                                     │ cert-expiration-067996    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-737478         │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:53:24
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:53:24.474551 1938284 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:53:24.474851 1938284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:53:24.474862 1938284 out.go:374] Setting ErrFile to fd 2...
	I1217 11:53:24.474866 1938284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:53:24.475097 1938284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:53:24.475709 1938284 out.go:368] Setting JSON to false
	I1217 11:53:24.476899 1938284 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20149,"bootTime":1765952255,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:53:24.476960 1938284 start.go:143] virtualization: kvm guest
	I1217 11:53:24.478995 1938284 out.go:179] * [no-preload-737478] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:53:24.480393 1938284 notify.go:221] Checking for updates...
	I1217 11:53:24.480406 1938284 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:53:24.481956 1938284 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:53:24.483348 1938284 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:53:24.484555 1938284 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:53:24.485712 1938284 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:53:24.487470 1938284 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:53:24.489363 1938284 config.go:182] Loaded profile config "kubernetes-upgrade-556754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:53:24.489599 1938284 config.go:182] Loaded profile config "old-k8s-version-401285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 11:53:24.489772 1938284 config.go:182] Loaded profile config "stopped-upgrade-287611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 11:53:24.489917 1938284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:53:24.519114 1938284 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:53:24.519364 1938284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:53:24.581135 1938284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:53:24.570422169 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:53:24.581248 1938284 docker.go:319] overlay module found
	I1217 11:53:24.582837 1938284 out.go:179] * Using the docker driver based on user configuration
	I1217 11:53:24.584192 1938284 start.go:309] selected driver: docker
	I1217 11:53:24.584208 1938284 start.go:927] validating driver "docker" against <nil>
	I1217 11:53:24.584220 1938284 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:53:24.584852 1938284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:53:24.642856 1938284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:53:24.632629573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:53:24.643034 1938284 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:53:24.643243 1938284 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:53:24.645086 1938284 out.go:179] * Using Docker driver with root privileges
	I1217 11:53:24.646327 1938284 cni.go:84] Creating CNI manager for ""
	I1217 11:53:24.646401 1938284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:53:24.646425 1938284 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:53:24.646528 1938284 start.go:353] cluster config:
	{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:53:24.647741 1938284 out.go:179] * Starting "no-preload-737478" primary control-plane node in "no-preload-737478" cluster
	I1217 11:53:24.649161 1938284 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:53:24.650424 1938284 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:53:24.652095 1938284 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:53:24.652201 1938284 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:53:24.652214 1938284 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:53:24.652293 1938284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json: {Name:mka67a5019c34bf5eb14f70d8ded95908609ca6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:53:24.652365 1938284 cache.go:107] acquiring lock: {Name:mkce365350b466caa625a853fa04d355dafaf737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652365 1938284 cache.go:107] acquiring lock: {Name:mkb34fd803350485ad0146dad2d5e5975c7a1fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652401 1938284 cache.go:107] acquiring lock: {Name:mk6a07e7ceeb8fe04825f0802eeaaeeee4c06443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652475 1938284 cache.go:107] acquiring lock: {Name:mk195f08cb3604d752263934a40f27bac4021dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652498 1938284 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:53:24.652479 1938284 cache.go:107] acquiring lock: {Name:mk69f66d091b3517cc19ba9a659d980495d072d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652507 1938284 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:24.652482 1938284 cache.go:107] acquiring lock: {Name:mka9f0fd2d6e879a6d51520f3e35096f83561a39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652519 1938284 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 163.44µs
	I1217 11:53:24.652563 1938284 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:53:24.652584 1938284 cache.go:107] acquiring lock: {Name:mka6d3f4b4fc66993c428fbcff6e92cde119967c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652579 1938284 cache.go:107] acquiring lock: {Name:mk9b11255ca4aa317635277ae364f17e3f34e430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652605 1938284 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:24.652726 1938284 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:24.652739 1938284 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:24.652765 1938284 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:24.652847 1938284 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:53:24.652870 1938284 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 458.565µs
	I1217 11:53:24.652887 1938284 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:53:24.652890 1938284 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:24.653996 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:24.654073 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:24.654100 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:24.654100 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:24.654148 1938284 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:24.654488 1938284 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:24.677423 1938284 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:53:24.677443 1938284 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:53:24.677461 1938284 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:53:24.677501 1938284 start.go:360] acquireMachinesLock for no-preload-737478: {Name:mk1ef5e7ed91896001178c3ee81911e4005528d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.677645 1938284 start.go:364] duration metric: took 118.012µs to acquireMachinesLock for "no-preload-737478"
	I1217 11:53:24.677679 1938284 start.go:93] Provisioning new machine with config: &{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:53:24.677762 1938284 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:53:20.577768 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:53:20.578133 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:53:20.578181 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:20.578221 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:20.617463 1894629 cri.go:89] found id: "f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:20.617485 1894629 cri.go:89] found id: ""
	I1217 11:53:20.617493 1894629 logs.go:282] 1 containers: [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653]
	I1217 11:53:20.617554 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.621389 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:53:20.621457 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:20.658964 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:20.658990 1894629 cri.go:89] found id: ""
	I1217 11:53:20.659001 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:53:20.659058 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.663214 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:53:20.663299 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:20.706508 1894629 cri.go:89] found id: ""
	I1217 11:53:20.706551 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.706563 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:53:20.706573 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:20.706630 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:20.749845 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:20.749866 1894629 cri.go:89] found id: ""
	I1217 11:53:20.749875 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:53:20.749920 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.754080 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:20.754139 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:20.794709 1894629 cri.go:89] found id: ""
	I1217 11:53:20.794737 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.794749 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:20.794758 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:20.794818 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:20.838328 1894629 cri.go:89] found id: "56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:20.838367 1894629 cri.go:89] found id: ""
	I1217 11:53:20.838380 1894629 logs.go:282] 1 containers: [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add]
	I1217 11:53:20.838442 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.842792 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:20.842870 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:20.883568 1894629 cri.go:89] found id: ""
	I1217 11:53:20.883599 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.883613 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:20.883621 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:53:20.883688 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:53:20.927792 1894629 cri.go:89] found id: ""
	I1217 11:53:20.927819 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.927831 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:53:20.927849 1894629 logs.go:123] Gathering logs for kube-controller-manager [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add] ...
	I1217 11:53:20.927865 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:20.971000 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:53:20.971038 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:53:21.022023 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:53:21.022054 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:21.065665 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:21.065701 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:21.182619 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:21.182662 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:21.208183 1894629 logs.go:123] Gathering logs for kube-apiserver [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653] ...
	I1217 11:53:21.208218 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:21.266791 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:53:21.266837 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:21.320224 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:53:21.320271 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:21.419307 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:21.419360 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:21.491253 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:23.991418 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:53:23.991936 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:53:23.991999 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:23.992066 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:24.035187 1894629 cri.go:89] found id: "f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:24.035212 1894629 cri.go:89] found id: ""
	I1217 11:53:24.035223 1894629 logs.go:282] 1 containers: [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653]
	I1217 11:53:24.035279 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.040063 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:53:24.040139 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:24.080627 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:24.080657 1894629 cri.go:89] found id: ""
	I1217 11:53:24.080674 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:53:24.080738 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.085088 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:53:24.085159 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:24.124655 1894629 cri.go:89] found id: ""
	I1217 11:53:24.124687 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.124699 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:53:24.124707 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:24.124765 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:24.170576 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:24.170603 1894629 cri.go:89] found id: ""
	I1217 11:53:24.170613 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:53:24.170682 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.175272 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:24.175338 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:24.218146 1894629 cri.go:89] found id: ""
	I1217 11:53:24.218176 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.218189 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:24.218202 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:24.218280 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:24.258655 1894629 cri.go:89] found id: "56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:24.258683 1894629 cri.go:89] found id: ""
	I1217 11:53:24.258693 1894629 logs.go:282] 1 containers: [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add]
	I1217 11:53:24.258757 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.262986 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:24.263050 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:24.301355 1894629 cri.go:89] found id: ""
	I1217 11:53:24.301392 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.301405 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:24.301423 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:53:24.301485 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:53:24.343269 1894629 cri.go:89] found id: ""
	I1217 11:53:24.343298 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.343309 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:53:24.343325 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:24.343341 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:24.470651 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:24.470689 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:24.491814 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:53:24.491840 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:24.581166 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:53:24.581199 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:53:24.631244 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:24.631284 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:24.703617 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:24.703643 1894629 logs.go:123] Gathering logs for kube-apiserver [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653] ...
	I1217 11:53:24.703660 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:24.749309 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:53:24.749336 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:24.798937 1894629 logs.go:123] Gathering logs for kube-controller-manager [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add] ...
	I1217 11:53:24.798977 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:24.840700 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:53:24.840728 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Dec 17 11:52:52 old-k8s-version-401285 crio[605]: time="2025-12-17T11:52:52.036971708Z" level=info msg="Created container 7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2/kubernetes-dashboard" id=48a9223e-2e5a-499f-a9c1-a25ef780cb58 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:52:52 old-k8s-version-401285 crio[605]: time="2025-12-17T11:52:52.037597543Z" level=info msg="Starting container: 7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e" id=756f9031-743c-4453-bf87-a625bfbb36e3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:52:52 old-k8s-version-401285 crio[605]: time="2025-12-17T11:52:52.039590843Z" level=info msg="Started container" PID=1785 containerID=7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2/kubernetes-dashboard id=756f9031-743c-4453-bf87-a625bfbb36e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=834a65035c3c169426fda832b31529d7d07a98e966ca35e4681456a4a6f6364c
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.667215882Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e9b5011e-ce26-43e5-a1a4-246ccedeef7e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.668232244Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0086016b-b336-4d75-a05f-ba6caa77e95a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.669300604Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=17d41051-3c53-483c-8b2d-bc0fd02d0091 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.669471053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.67436617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.67459191Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e7bdeddb948a757bad49f33a46ff4e8d01624111cf740eff930a93610ad78b13/merged/etc/passwd: no such file or directory"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.6746256Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e7bdeddb948a757bad49f33a46ff4e8d01624111cf740eff930a93610ad78b13/merged/etc/group: no such file or directory"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.674920278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.706194503Z" level=info msg="Created container 8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60: kube-system/storage-provisioner/storage-provisioner" id=17d41051-3c53-483c-8b2d-bc0fd02d0091 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.706903923Z" level=info msg="Starting container: 8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60" id=ac6fc2c5-d0bf-41b8-a11f-e0741f67fbc6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.708782676Z" level=info msg="Started container" PID=1810 containerID=8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60 description=kube-system/storage-provisioner/storage-provisioner id=ac6fc2c5-d0bf-41b8-a11f-e0741f67fbc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=61cdf9ad55c16f6f387cd3de8421128eb1af073de5d5fd84ed53940ba879a4bf
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.54851424Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3b522475-a243-4f4d-bb7b-45693f529cea name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.549666405Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6243b904-6d43-488a-bd62-848390af1815 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.550800908Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper" id=30a5364a-6a4c-4574-b49e-51bc0a46d7a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.550942821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.557063135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.557735585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.586709903Z" level=info msg="Created container 792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper" id=30a5364a-6a4c-4574-b49e-51bc0a46d7a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.587576335Z" level=info msg="Starting container: 792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b" id=65f449af-3c75-4639-9cc8-e5cdce780cf1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.589515902Z" level=info msg="Started container" PID=1826 containerID=792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper id=65f449af-3c75-4639-9cc8-e5cdce780cf1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18a0f40d9f728208fdef42d46f74ecad264ec9717388155d2d65c78abaca993f
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.689165465Z" level=info msg="Removing container: 531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7" id=22f5e297-0c2f-4ed4-936c-b3cf12f4c3ef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.700408829Z" level=info msg="Removed container 531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper" id=22f5e297-0c2f-4ed4-936c-b3cf12f4c3ef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	792abf94632c8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   18a0f40d9f728       dashboard-metrics-scraper-5f989dc9cf-prh7v       kubernetes-dashboard
	8f100212ae2fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   61cdf9ad55c16       storage-provisioner                              kube-system
	7e9d90fea5777       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   834a65035c3c1       kubernetes-dashboard-8694d4445c-klmw2            kubernetes-dashboard
	ffd23d6beaaa5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   80a76c3dda141       busybox                                          default
	58dc04f8562f4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   122c75942cc9a       coredns-5dd5756b68-nkbwq                         kube-system
	70af8698b9024       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   9b4695112bb31       kindnet-dmn7l                                    kube-system
	cde52efb57536       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   61cdf9ad55c16       storage-provisioner                              kube-system
	3a799da8c5774       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   5966e2cde48f8       kube-proxy-5867r                                 kube-system
	2f758407de6a5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   174d4da11964a       kube-controller-manager-old-k8s-version-401285   kube-system
	9aa49d40045e1       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   3913c4020003f       kube-scheduler-old-k8s-version-401285            kube-system
	149391f8debc5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   d5179f4a807d2       etcd-old-k8s-version-401285                      kube-system
	8f15dd64ca827       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   8ebf8c50641f4       kube-apiserver-old-k8s-version-401285            kube-system
	
	
	==> coredns [58dc04f8562f4834da267a4a4e1e01fea4f3965d999f69cb7a337c022308ca4a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57122 - 50041 "HINFO IN 6731898185667097896.3061840381048538122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028662591s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-401285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-401285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=old-k8s-version-401285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_51_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:51:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-401285
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:53:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-401285
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                025c44d6-1ab3-4126-8994-078d0fca59b0
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-nkbwq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-401285                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-dmn7l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-401285             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-401285    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-5867r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-401285             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-prh7v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-klmw2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-401285 event: Registered Node old-k8s-version-401285 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-401285 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x9 over 58s)    kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)    kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-401285 event: Registered Node old-k8s-version-401285 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [149391f8debc5ff3a0624fc6350eb74473e08e63dc1f13dba71547b6cbc7f5ca] <==
	{"level":"info","ts":"2025-12-17T11:52:31.132935Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T11:52:31.132946Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T11:52:31.13306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-17T11:52:31.133154Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-17T11:52:31.133305Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:52:31.133349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:52:31.135686Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T11:52:31.136258Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:52:31.136309Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:52:31.137067Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T11:52:31.137123Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T11:52:32.322876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T11:52:32.322926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:52:32.322947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T11:52:32.322962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.32297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.32298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.322989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.324157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:52:32.32417Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-401285 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:52:32.324182Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:52:32.32444Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:52:32.32449Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:52:32.32536Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-17T11:52:32.325788Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:53:28 up  5:35,  0 user,  load average: 1.68, 2.39, 1.83
	Linux old-k8s-version-401285 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70af8698b90248c8d07b9effdf600c38738c38914120182e85c36977d7916bf2] <==
	I1217 11:52:34.120054       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:52:34.120306       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:52:34.120485       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:52:34.120506       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:52:34.120560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:52:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:52:34.418061       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:52:34.418108       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:52:34.418121       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:52:34.516930       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:52:34.804748       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:52:34.804782       1 metrics.go:72] Registering metrics
	I1217 11:52:34.804854       1 controller.go:711] "Syncing nftables rules"
	I1217 11:52:44.417821       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:52:44.417895       1 main.go:301] handling current node
	I1217 11:52:54.418671       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:52:54.418725       1 main.go:301] handling current node
	I1217 11:53:04.418472       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:53:04.418518       1 main.go:301] handling current node
	I1217 11:53:14.419435       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:53:14.419477       1 main.go:301] handling current node
	I1217 11:53:24.425319       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:53:24.425361       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f15dd64ca827bde9a31635cadaed80039200397a5a70c03ee468cf1952c4c87] <==
	I1217 11:52:33.410299       1 aggregator.go:166] initial CRD sync complete...
	I1217 11:52:33.410313       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 11:52:33.410334       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:52:33.410342       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:52:33.410443       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1217 11:52:33.410585       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1217 11:52:33.413079       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 11:52:33.447136       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 11:52:34.239601       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 11:52:34.272818       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 11:52:34.290150       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:52:34.297186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:52:34.305134       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 11:52:34.313333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:52:34.346099       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.14.142"}
	I1217 11:52:34.370492       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.122.9"}
	E1217 11:52:43.411327       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1217 11:52:45.493543       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:52:45.559123       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 11:52:45.647936       1 controller.go:624] quota admission added evaluator for: endpoints
	E1217 11:52:53.411953       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1217 11:53:03.412935       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1217 11:53:13.413388       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1217 11:53:23.414206       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [2f758407de6a5197364df61528ddb100122e25164fa97424a91e1cfbf63d5b32] <==
	I1217 11:52:45.755582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="194.049989ms"
	I1217 11:52:45.755772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.566µs"
	I1217 11:52:45.756906       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-prh7v"
	I1217 11:52:45.756936       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-klmw2"
	I1217 11:52:45.764595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="201.466345ms"
	I1217 11:52:45.764876       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 11:52:45.765128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="203.523531ms"
	I1217 11:52:45.772380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.154507ms"
	I1217 11:52:45.772427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.786186ms"
	I1217 11:52:45.772597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.575µs"
	I1217 11:52:45.772616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.415µs"
	I1217 11:52:45.774217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.669µs"
	I1217 11:52:45.783840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.211µs"
	I1217 11:52:46.084856       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 11:52:46.163243       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 11:52:46.163280       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 11:52:48.629934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.836µs"
	I1217 11:52:49.639318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="196.038µs"
	I1217 11:52:50.641683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.906µs"
	I1217 11:52:52.650834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.613501ms"
	I1217 11:52:52.650954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.062µs"
	I1217 11:53:11.700689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.325µs"
	I1217 11:53:12.052301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.403072ms"
	I1217 11:53:12.052410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.153µs"
	I1217 11:53:16.076150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.461µs"
	
	
	==> kube-proxy [3a799da8c577475c5da3a3846bd74a1474d4f4d9552c749aa00155b4a2b65fd9] <==
	I1217 11:52:33.964455       1 server_others.go:69] "Using iptables proxy"
	I1217 11:52:33.975846       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1217 11:52:33.995979       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:52:33.999208       1 server_others.go:152] "Using iptables Proxier"
	I1217 11:52:33.999247       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 11:52:33.999253       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 11:52:33.999284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 11:52:33.999478       1 server.go:846] "Version info" version="v1.28.0"
	I1217 11:52:33.999518       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:52:34.000230       1 config.go:188] "Starting service config controller"
	I1217 11:52:34.000249       1 config.go:97] "Starting endpoint slice config controller"
	I1217 11:52:34.000266       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 11:52:34.000296       1 config.go:315] "Starting node config controller"
	I1217 11:52:34.000335       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 11:52:34.000266       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 11:52:34.101240       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 11:52:34.101271       1 shared_informer.go:318] Caches are synced for node config
	I1217 11:52:34.101303       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [9aa49d40045e1e67467ef562959460e7790cb28aff33d0ead73eb299efd0348c] <==
	I1217 11:52:31.633777       1 serving.go:348] Generated self-signed cert in-memory
	I1217 11:52:33.378013       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 11:52:33.378035       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:52:33.381756       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1217 11:52:33.381781       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1217 11:52:33.381785       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:52:33.381817       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 11:52:33.381843       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 11:52:33.381865       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1217 11:52:33.382692       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 11:52:33.382790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 11:52:33.482229       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1217 11:52:33.482276       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1217 11:52:33.482268       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817825     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2bx\" (UniqueName: \"kubernetes.io/projected/ad3e2463-5388-453d-8fe6-25428420edfd-kube-api-access-nn2bx\") pod \"kubernetes-dashboard-8694d4445c-klmw2\" (UID: \"ad3e2463-5388-453d-8fe6-25428420edfd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2"
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817873     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ad3e2463-5388-453d-8fe6-25428420edfd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-klmw2\" (UID: \"ad3e2463-5388-453d-8fe6-25428420edfd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2"
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817896     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxhq\" (UniqueName: \"kubernetes.io/projected/f99f24f7-0927-4395-9cfd-e0b94f087da2-kube-api-access-7zxhq\") pod \"dashboard-metrics-scraper-5f989dc9cf-prh7v\" (UID: \"f99f24f7-0927-4395-9cfd-e0b94f087da2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v"
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817977     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f99f24f7-0927-4395-9cfd-e0b94f087da2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-prh7v\" (UID: \"f99f24f7-0927-4395-9cfd-e0b94f087da2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v"
	Dec 17 11:52:48 old-k8s-version-401285 kubelet[772]: I1217 11:52:48.618585     772 scope.go:117] "RemoveContainer" containerID="5ab96aa43f9a2fe730bebb3eef8edf1be0219da2566c0c234e28edaaf5721925"
	Dec 17 11:52:49 old-k8s-version-401285 kubelet[772]: I1217 11:52:49.624202     772 scope.go:117] "RemoveContainer" containerID="5ab96aa43f9a2fe730bebb3eef8edf1be0219da2566c0c234e28edaaf5721925"
	Dec 17 11:52:49 old-k8s-version-401285 kubelet[772]: I1217 11:52:49.624582     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:52:49 old-k8s-version-401285 kubelet[772]: E1217 11:52:49.625158     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:52:50 old-k8s-version-401285 kubelet[772]: I1217 11:52:50.628607     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:52:50 old-k8s-version-401285 kubelet[772]: E1217 11:52:50.628951     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:52:52 old-k8s-version-401285 kubelet[772]: I1217 11:52:52.645318     772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2" podStartSLOduration=1.731988817 podCreationTimestamp="2025-12-17 11:52:45 +0000 UTC" firstStartedPulling="2025-12-17 11:52:46.088793227 +0000 UTC m=+15.640512990" lastFinishedPulling="2025-12-17 11:52:52.002045828 +0000 UTC m=+21.553765589" observedRunningTime="2025-12-17 11:52:52.644869294 +0000 UTC m=+22.196589064" watchObservedRunningTime="2025-12-17 11:52:52.645241416 +0000 UTC m=+22.196961186"
	Dec 17 11:52:56 old-k8s-version-401285 kubelet[772]: I1217 11:52:56.066585     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:52:56 old-k8s-version-401285 kubelet[772]: E1217 11:52:56.066860     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:53:04 old-k8s-version-401285 kubelet[772]: I1217 11:53:04.666758     772 scope.go:117] "RemoveContainer" containerID="cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: I1217 11:53:11.547670     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: I1217 11:53:11.687932     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: I1217 11:53:11.688149     772 scope.go:117] "RemoveContainer" containerID="792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: E1217 11:53:11.688522     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:53:16 old-k8s-version-401285 kubelet[772]: I1217 11:53:16.066313     772 scope.go:117] "RemoveContainer" containerID="792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	Dec 17 11:53:16 old-k8s-version-401285 kubelet[772]: E1217 11:53:16.066787     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:53:26 old-k8s-version-401285 kubelet[772]: I1217 11:53:26.162339     772 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: kubelet.service: Consumed 1.614s CPU time.
	
	
	==> kubernetes-dashboard [7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e] <==
	2025/12/17 11:52:52 Starting overwatch
	2025/12/17 11:52:52 Using namespace: kubernetes-dashboard
	2025/12/17 11:52:52 Using in-cluster config to connect to apiserver
	2025/12/17 11:52:52 Using secret token for csrf signing
	2025/12/17 11:52:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:52:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:52:52 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 11:52:52 Generating JWE encryption key
	2025/12/17 11:52:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:52:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:52:52 Initializing JWE encryption key from synchronized object
	2025/12/17 11:52:52 Creating in-cluster Sidecar client
	2025/12/17 11:52:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:52:52 Serving insecurely on HTTP port: 9090
	2025/12/17 11:53:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60] <==
	I1217 11:53:04.720755       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:53:04.729245       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:53:04.729289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 11:53:22.123260       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:53:22.123326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca339107-09fc-488a-9de1-8033a0f945ef", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-401285_d566a4c7-e57e-4936-899e-6e544d6682fb became leader
	I1217 11:53:22.123409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-401285_d566a4c7-e57e-4936-899e-6e544d6682fb!
	I1217 11:53:22.223644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-401285_d566a4c7-e57e-4936-899e-6e544d6682fb!
	
	
	==> storage-provisioner [cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb] <==
	I1217 11:52:33.924395       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:53:03.927909       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-401285 -n old-k8s-version-401285
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-401285 -n old-k8s-version-401285: exit status 2 (396.37663ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-401285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-401285
helpers_test.go:244: (dbg) docker inspect old-k8s-version-401285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0",
	        "Created": "2025-12-17T11:51:14.16613837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1929282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:52:24.041685697Z",
	            "FinishedAt": "2025-12-17T11:52:23.118301553Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/hosts",
	        "LogPath": "/var/lib/docker/containers/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0/2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0-json.log",
	        "Name": "/old-k8s-version-401285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-401285:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-401285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cc7fce2754ba723f2f20f4adae46a2cad87962d985de321644a87dacc624cc0",
	                "LowerDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f32e5331b2e830b4573e7c0c1b32c482e97d2a5bf30d67aff242559a36ab519/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-401285",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-401285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-401285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-401285",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-401285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c64693db121db6266350b6c40d5c57ea3cee68050fb3bfd208af900fde02e4b0",
	            "SandboxKey": "/var/run/docker/netns/c64693db121d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-401285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28c236790a3f61d93a940c5e5d3e7f4dd4932eb2cb6dabba52c6ea762e486410",
	                    "EndpointID": "fde3e80a0389686ab1d9afb8f29e3d1f88c77a7e4381cfc038b09ba62991aec9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:35:2a:52:6d:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-401285",
	                        "2cc7fce2754b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285: exit status 2 (397.169243ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-401285 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-401285 logs -n 25: (1.389840538s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-213935 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo containerd config dump                                                                                                                                                                                                  │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ ssh     │ -p cilium-213935 sudo crio config                                                                                                                                                                                                             │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ delete  │ -p cilium-213935                                                                                                                                                                                                                              │ cilium-213935             │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p force-systemd-flag-881315 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ ssh     │ force-systemd-flag-881315 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p force-systemd-flag-881315                                                                                                                                                                                                                  │ force-systemd-flag-881315 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p cert-options-714247 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ cert-options-714247 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ -p cert-options-714247 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ delete  │ -p cert-options-714247                                                                                                                                                                                                                        │ cert-options-714247       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │                     │
	│ stop    │ -p old-k8s-version-401285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-401285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-067996    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p cert-expiration-067996                                                                                                                                                                                                                     │ cert-expiration-067996    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-737478         │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-401285    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:53:24
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:53:24.474551 1938284 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:53:24.474851 1938284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:53:24.474862 1938284 out.go:374] Setting ErrFile to fd 2...
	I1217 11:53:24.474866 1938284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:53:24.475097 1938284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:53:24.475709 1938284 out.go:368] Setting JSON to false
	I1217 11:53:24.476899 1938284 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20149,"bootTime":1765952255,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:53:24.476960 1938284 start.go:143] virtualization: kvm guest
	I1217 11:53:24.478995 1938284 out.go:179] * [no-preload-737478] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:53:24.480393 1938284 notify.go:221] Checking for updates...
	I1217 11:53:24.480406 1938284 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:53:24.481956 1938284 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:53:24.483348 1938284 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:53:24.484555 1938284 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:53:24.485712 1938284 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:53:24.487470 1938284 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:53:24.489363 1938284 config.go:182] Loaded profile config "kubernetes-upgrade-556754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:53:24.489599 1938284 config.go:182] Loaded profile config "old-k8s-version-401285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 11:53:24.489772 1938284 config.go:182] Loaded profile config "stopped-upgrade-287611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 11:53:24.489917 1938284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:53:24.519114 1938284 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:53:24.519364 1938284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:53:24.581135 1938284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:53:24.570422169 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:53:24.581248 1938284 docker.go:319] overlay module found
	I1217 11:53:24.582837 1938284 out.go:179] * Using the docker driver based on user configuration
	I1217 11:53:24.584192 1938284 start.go:309] selected driver: docker
	I1217 11:53:24.584208 1938284 start.go:927] validating driver "docker" against <nil>
	I1217 11:53:24.584220 1938284 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:53:24.584852 1938284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:53:24.642856 1938284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:53:24.632629573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:53:24.643034 1938284 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:53:24.643243 1938284 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:53:24.645086 1938284 out.go:179] * Using Docker driver with root privileges
	I1217 11:53:24.646327 1938284 cni.go:84] Creating CNI manager for ""
	I1217 11:53:24.646401 1938284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:53:24.646425 1938284 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:53:24.646528 1938284 start.go:353] cluster config:
	{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:53:24.647741 1938284 out.go:179] * Starting "no-preload-737478" primary control-plane node in "no-preload-737478" cluster
	I1217 11:53:24.649161 1938284 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:53:24.650424 1938284 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:53:24.652095 1938284 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:53:24.652201 1938284 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:53:24.652214 1938284 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:53:24.652293 1938284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json: {Name:mka67a5019c34bf5eb14f70d8ded95908609ca6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:53:24.652365 1938284 cache.go:107] acquiring lock: {Name:mkce365350b466caa625a853fa04d355dafaf737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652365 1938284 cache.go:107] acquiring lock: {Name:mkb34fd803350485ad0146dad2d5e5975c7a1fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652401 1938284 cache.go:107] acquiring lock: {Name:mk6a07e7ceeb8fe04825f0802eeaaeeee4c06443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652475 1938284 cache.go:107] acquiring lock: {Name:mk195f08cb3604d752263934a40f27bac4021dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652498 1938284 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:53:24.652479 1938284 cache.go:107] acquiring lock: {Name:mk69f66d091b3517cc19ba9a659d980495d072d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652507 1938284 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:24.652482 1938284 cache.go:107] acquiring lock: {Name:mka9f0fd2d6e879a6d51520f3e35096f83561a39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652519 1938284 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 163.44µs
	I1217 11:53:24.652563 1938284 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:53:24.652584 1938284 cache.go:107] acquiring lock: {Name:mka6d3f4b4fc66993c428fbcff6e92cde119967c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652579 1938284 cache.go:107] acquiring lock: {Name:mk9b11255ca4aa317635277ae364f17e3f34e430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.652605 1938284 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:24.652726 1938284 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:24.652739 1938284 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:24.652765 1938284 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:24.652847 1938284 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:53:24.652870 1938284 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 458.565µs
	I1217 11:53:24.652887 1938284 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:53:24.652890 1938284 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:24.653996 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:24.654073 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:24.654100 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:24.654100 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:24.654148 1938284 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:24.654488 1938284 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:24.677423 1938284 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:53:24.677443 1938284 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:53:24.677461 1938284 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:53:24.677501 1938284 start.go:360] acquireMachinesLock for no-preload-737478: {Name:mk1ef5e7ed91896001178c3ee81911e4005528d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:53:24.677645 1938284 start.go:364] duration metric: took 118.012µs to acquireMachinesLock for "no-preload-737478"
	I1217 11:53:24.677679 1938284 start.go:93] Provisioning new machine with config: &{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:53:24.677762 1938284 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:53:20.577768 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:53:20.578133 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:53:20.578181 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:20.578221 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:20.617463 1894629 cri.go:89] found id: "f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:20.617485 1894629 cri.go:89] found id: ""
	I1217 11:53:20.617493 1894629 logs.go:282] 1 containers: [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653]
	I1217 11:53:20.617554 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.621389 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:53:20.621457 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:20.658964 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:20.658990 1894629 cri.go:89] found id: ""
	I1217 11:53:20.659001 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:53:20.659058 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.663214 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:53:20.663299 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:20.706508 1894629 cri.go:89] found id: ""
	I1217 11:53:20.706551 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.706563 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:53:20.706573 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:20.706630 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:20.749845 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:20.749866 1894629 cri.go:89] found id: ""
	I1217 11:53:20.749875 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:53:20.749920 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.754080 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:20.754139 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:20.794709 1894629 cri.go:89] found id: ""
	I1217 11:53:20.794737 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.794749 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:20.794758 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:20.794818 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:20.838328 1894629 cri.go:89] found id: "56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:20.838367 1894629 cri.go:89] found id: ""
	I1217 11:53:20.838380 1894629 logs.go:282] 1 containers: [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add]
	I1217 11:53:20.838442 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:20.842792 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:20.842870 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:20.883568 1894629 cri.go:89] found id: ""
	I1217 11:53:20.883599 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.883613 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:20.883621 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:53:20.883688 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:53:20.927792 1894629 cri.go:89] found id: ""
	I1217 11:53:20.927819 1894629 logs.go:282] 0 containers: []
	W1217 11:53:20.927831 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:53:20.927849 1894629 logs.go:123] Gathering logs for kube-controller-manager [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add] ...
	I1217 11:53:20.927865 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:20.971000 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:53:20.971038 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:53:21.022023 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:53:21.022054 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:21.065665 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:21.065701 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:21.182619 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:21.182662 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:21.208183 1894629 logs.go:123] Gathering logs for kube-apiserver [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653] ...
	I1217 11:53:21.208218 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:21.266791 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:53:21.266837 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:21.320224 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:53:21.320271 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:21.419307 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:21.419360 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:21.491253 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:23.991418 1894629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:53:23.991936 1894629 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:53:23.991999 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:23.992066 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:24.035187 1894629 cri.go:89] found id: "f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:24.035212 1894629 cri.go:89] found id: ""
	I1217 11:53:24.035223 1894629 logs.go:282] 1 containers: [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653]
	I1217 11:53:24.035279 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.040063 1894629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:53:24.040139 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:24.080627 1894629 cri.go:89] found id: "77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:24.080657 1894629 cri.go:89] found id: ""
	I1217 11:53:24.080674 1894629 logs.go:282] 1 containers: [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506]
	I1217 11:53:24.080738 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.085088 1894629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:53:24.085159 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:24.124655 1894629 cri.go:89] found id: ""
	I1217 11:53:24.124687 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.124699 1894629 logs.go:284] No container was found matching "coredns"
	I1217 11:53:24.124707 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:24.124765 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:24.170576 1894629 cri.go:89] found id: "e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:24.170603 1894629 cri.go:89] found id: ""
	I1217 11:53:24.170613 1894629 logs.go:282] 1 containers: [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3]
	I1217 11:53:24.170682 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.175272 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:24.175338 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:24.218146 1894629 cri.go:89] found id: ""
	I1217 11:53:24.218176 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.218189 1894629 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:24.218202 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:24.218280 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:24.258655 1894629 cri.go:89] found id: "56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:24.258683 1894629 cri.go:89] found id: ""
	I1217 11:53:24.258693 1894629 logs.go:282] 1 containers: [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add]
	I1217 11:53:24.258757 1894629 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.262986 1894629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:24.263050 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:24.301355 1894629 cri.go:89] found id: ""
	I1217 11:53:24.301392 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.301405 1894629 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:24.301423 1894629 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:53:24.301485 1894629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:53:24.343269 1894629 cri.go:89] found id: ""
	I1217 11:53:24.343298 1894629 logs.go:282] 0 containers: []
	W1217 11:53:24.343309 1894629 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:53:24.343325 1894629 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:24.343341 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:24.470651 1894629 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:24.470689 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:24.491814 1894629 logs.go:123] Gathering logs for kube-scheduler [e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3] ...
	I1217 11:53:24.491840 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e452a405d207b7fe836aba25665c45a659654d77ddfc78940cb8ed1070601ad3"
	I1217 11:53:24.581166 1894629 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:53:24.581199 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:53:24.631244 1894629 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:24.631284 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:24.703617 1894629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:24.703643 1894629 logs.go:123] Gathering logs for kube-apiserver [f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653] ...
	I1217 11:53:24.703660 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9e52172a657859589fcecf9d8c58c40685ab59df677e3f69eb946ccd113d653"
	I1217 11:53:24.749309 1894629 logs.go:123] Gathering logs for etcd [77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506] ...
	I1217 11:53:24.749336 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77387d4a57f6b326ec25fc4d9fe395e3056a0ebf6953b8d7f6a6eebb5d7ad506"
	I1217 11:53:24.798937 1894629 logs.go:123] Gathering logs for kube-controller-manager [56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add] ...
	I1217 11:53:24.798977 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56776e76527010a74ce0585d66207718322163ac471efe4fe571692f39c58add"
	I1217 11:53:24.840700 1894629 logs.go:123] Gathering logs for container status ...
	I1217 11:53:24.840728 1894629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:23.824466 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:53:23.824941 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:53:23.825006 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:23.825166 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:23.855629 1888817 cri.go:89] found id: "927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0"
	I1217 11:53:23.855660 1888817 cri.go:89] found id: ""
	I1217 11:53:23.855672 1888817 logs.go:282] 1 containers: [927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0]
	I1217 11:53:23.855739 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:53:23.860257 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:53:23.860362 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:23.890006 1888817 cri.go:89] found id: ""
	I1217 11:53:23.890035 1888817 logs.go:282] 0 containers: []
	W1217 11:53:23.890047 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:53:23.890054 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:53:23.890111 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:23.920712 1888817 cri.go:89] found id: ""
	I1217 11:53:23.920739 1888817 logs.go:282] 0 containers: []
	W1217 11:53:23.920747 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:53:23.920753 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:23.920810 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:23.949173 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:53:23.949196 1888817 cri.go:89] found id: ""
	I1217 11:53:23.949208 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:53:23.949278 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:53:23.953377 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:23.953461 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:23.982691 1888817 cri.go:89] found id: ""
	I1217 11:53:23.982715 1888817 logs.go:282] 0 containers: []
	W1217 11:53:23.982725 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:23.982734 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:23.982790 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:24.013460 1888817 cri.go:89] found id: "c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c"
	I1217 11:53:24.013490 1888817 cri.go:89] found id: ""
	I1217 11:53:24.013504 1888817 logs.go:282] 1 containers: [c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c]
	I1217 11:53:24.013584 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:53:24.017918 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:24.017987 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:24.050469 1888817 cri.go:89] found id: ""
	I1217 11:53:24.050500 1888817 logs.go:282] 0 containers: []
	W1217 11:53:24.050512 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:24.050520 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:53:24.050600 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:53:24.083266 1888817 cri.go:89] found id: ""
	I1217 11:53:24.083296 1888817 logs.go:282] 0 containers: []
	W1217 11:53:24.083306 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:53:24.083319 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:53:24.083333 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:24.120612 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:24.120657 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:24.233820 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:24.233853 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:24.266842 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:24.266875 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:24.329104 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:24.329124 1888817 logs.go:123] Gathering logs for kube-apiserver [927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0] ...
	I1217 11:53:24.329136 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0"
	I1217 11:53:24.366787 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:53:24.366819 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:53:24.396205 1888817 logs.go:123] Gathering logs for kube-controller-manager [c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c] ...
	I1217 11:53:24.396233 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c"
	I1217 11:53:24.425995 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:53:24.426023 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:53:26.984594 1888817 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 11:53:26.985013 1888817 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 11:53:26.985070 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:26.985117 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:27.014593 1888817 cri.go:89] found id: "927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0"
	I1217 11:53:27.014622 1888817 cri.go:89] found id: ""
	I1217 11:53:27.014634 1888817 logs.go:282] 1 containers: [927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0]
	I1217 11:53:27.014701 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:53:27.019113 1888817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 11:53:27.019174 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:27.048824 1888817 cri.go:89] found id: ""
	I1217 11:53:27.048856 1888817 logs.go:282] 0 containers: []
	W1217 11:53:27.048867 1888817 logs.go:284] No container was found matching "etcd"
	I1217 11:53:27.048876 1888817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 11:53:27.048945 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:27.078890 1888817 cri.go:89] found id: ""
	I1217 11:53:27.078918 1888817 logs.go:282] 0 containers: []
	W1217 11:53:27.078926 1888817 logs.go:284] No container was found matching "coredns"
	I1217 11:53:27.078932 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:27.078994 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:27.107460 1888817 cri.go:89] found id: "bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:53:27.107481 1888817 cri.go:89] found id: ""
	I1217 11:53:27.107489 1888817 logs.go:282] 1 containers: [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20]
	I1217 11:53:27.107592 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:53:27.112190 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:27.112260 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:27.142111 1888817 cri.go:89] found id: ""
	I1217 11:53:27.142138 1888817 logs.go:282] 0 containers: []
	W1217 11:53:27.142148 1888817 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:27.142156 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:27.142216 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:27.171596 1888817 cri.go:89] found id: "c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c"
	I1217 11:53:27.171622 1888817 cri.go:89] found id: ""
	I1217 11:53:27.171633 1888817 logs.go:282] 1 containers: [c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c]
	I1217 11:53:27.171690 1888817 ssh_runner.go:195] Run: which crictl
	I1217 11:53:27.176043 1888817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:27.176102 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:27.203405 1888817 cri.go:89] found id: ""
	I1217 11:53:27.203430 1888817 logs.go:282] 0 containers: []
	W1217 11:53:27.203437 1888817 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:27.203444 1888817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:53:27.203497 1888817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:53:27.233156 1888817 cri.go:89] found id: ""
	I1217 11:53:27.233188 1888817 logs.go:282] 0 containers: []
	W1217 11:53:27.233199 1888817 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:53:27.233211 1888817 logs.go:123] Gathering logs for CRI-O ...
	I1217 11:53:27.233228 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 11:53:27.289409 1888817 logs.go:123] Gathering logs for container status ...
	I1217 11:53:27.289444 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:27.323396 1888817 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:27.323416 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:27.444673 1888817 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:27.444708 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:27.469517 1888817 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:27.469566 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:27.543039 1888817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:27.543070 1888817 logs.go:123] Gathering logs for kube-apiserver [927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0] ...
	I1217 11:53:27.543088 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 927bf469232ecfdec90221149dac81fbfde6a7c8f3eb8ac9b229a2387e2e16f0"
	I1217 11:53:27.581597 1888817 logs.go:123] Gathering logs for kube-scheduler [bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20] ...
	I1217 11:53:27.581632 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bad346e21352888d907c3fc4a48fc225d6b56af32e650efaff1d09fbceafbd20"
	I1217 11:53:27.617947 1888817 logs.go:123] Gathering logs for kube-controller-manager [c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c] ...
	I1217 11:53:27.617979 1888817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c44889cf14930a7cf5ebab22461000c67cd28d865770f2cede5f591f8dc38a0c"
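
The two collection passes above repeat the same probe set each time the apiserver healthz check at 192.168.76.2:8443 is refused. A minimal sketch for reproducing that probe set by hand from a shell inside the node (for example via minikube ssh), assuming crictl is on the PATH; container IDs will differ from run to run:

    # list every container the runtime knows about, including exited ones
    sudo crictl ps -a
    # recent kubelet and CRI-O service logs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # kernel warnings and errors
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
    # logs of one specific control-plane container found via crictl ps
    sudo crictl logs --tail 400 <container-id>
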
	I1217 11:53:24.680033 1938284 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:53:24.680330 1938284 start.go:159] libmachine.API.Create for "no-preload-737478" (driver="docker")
	I1217 11:53:24.680366 1938284 client.go:173] LocalClient.Create starting
	I1217 11:53:24.680448 1938284 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:53:24.680492 1938284 main.go:143] libmachine: Decoding PEM data...
	I1217 11:53:24.680523 1938284 main.go:143] libmachine: Parsing certificate...
	I1217 11:53:24.680611 1938284 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:53:24.680640 1938284 main.go:143] libmachine: Decoding PEM data...
	I1217 11:53:24.680653 1938284 main.go:143] libmachine: Parsing certificate...
	I1217 11:53:24.681135 1938284 cli_runner.go:164] Run: docker network inspect no-preload-737478 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:53:24.702294 1938284 cli_runner.go:211] docker network inspect no-preload-737478 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:53:24.702403 1938284 network_create.go:284] running [docker network inspect no-preload-737478] to gather additional debugging logs...
	I1217 11:53:24.702434 1938284 cli_runner.go:164] Run: docker network inspect no-preload-737478
	W1217 11:53:24.720971 1938284 cli_runner.go:211] docker network inspect no-preload-737478 returned with exit code 1
	I1217 11:53:24.721011 1938284 network_create.go:287] error running [docker network inspect no-preload-737478]: docker network inspect no-preload-737478: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-737478 not found
	I1217 11:53:24.721027 1938284 network_create.go:289] output of [docker network inspect no-preload-737478]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-737478 not found
	
	** /stderr **
	I1217 11:53:24.721121 1938284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:53:24.741573 1938284 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3d92c06bf7e1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:dc:f5:1a:95:c6} reservation:<nil>}
	I1217 11:53:24.742383 1938284 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e34a3db6b97 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:b3:69:9a:9a:9f} reservation:<nil>}
	I1217 11:53:24.743023 1938284 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d8460370d724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:bb:68:9a:9d:ac} reservation:<nil>}
	I1217 11:53:24.743474 1938284 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cb66266d333d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:79:2f:64:02:df} reservation:<nil>}
	I1217 11:53:24.743868 1938284 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0f9b0e663d9b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:b0:7e:78:0f:69} reservation:<nil>}
	I1217 11:53:24.744314 1938284 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-28c236790a3f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7a:f8:15:2d:13:ea} reservation:<nil>}
	I1217 11:53:24.745020 1938284 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020128b0}
	I1217 11:53:24.745053 1938284 network_create.go:124] attempt to create docker network no-preload-737478 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 11:53:24.745112 1938284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-737478 no-preload-737478
	I1217 11:53:24.803437 1938284 network_create.go:108] docker network no-preload-737478 192.168.103.0/24 created
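
The subnet scan above walks candidate private /24 ranges and skips every one that an existing docker bridge already owns before settling on 192.168.103.0/24. A rough manual equivalent, assuming the docker CLI on the host (the final create call is a trimmed-down version of the one in the log):

    # subnets currently claimed by docker networks
    docker network ls --format '{{.Name}}' \
      | xargs -I{} docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}' {}
    # create the cluster network on a free range
    docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o com.docker.network.driver.mtu=1500 no-preload-737478
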
	I1217 11:53:24.803478 1938284 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-737478" container
	I1217 11:53:24.803579 1938284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:53:24.823388 1938284 cli_runner.go:164] Run: docker volume create no-preload-737478 --label name.minikube.sigs.k8s.io=no-preload-737478 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:53:24.841724 1938284 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 11:53:24.846345 1938284 oci.go:103] Successfully created a docker volume no-preload-737478
	I1217 11:53:24.846451 1938284 cli_runner.go:164] Run: docker run --rm --name no-preload-737478-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-737478 --entrypoint /usr/bin/test -v no-preload-737478:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:53:24.847674 1938284 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 11:53:24.881228 1938284 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 11:53:24.925038 1938284 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 11:53:24.970363 1938284 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 11:53:24.997529 1938284 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 11:53:25.281820 1938284 oci.go:107] Successfully prepared a docker volume no-preload-737478
	I1217 11:53:25.281885 1938284 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1217 11:53:25.281984 1938284 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:53:25.282025 1938284 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:53:25.282069 1938284 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:53:25.330111 1938284 cache.go:157] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:53:25.330147 1938284 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 677.63265ms
	I1217 11:53:25.330169 1938284 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:53:25.346852 1938284 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-737478 --name no-preload-737478 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-737478 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-737478 --network no-preload-737478 --ip 192.168.103.2 --volume no-preload-737478:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
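
The node container publishes each guest port (22, 2376, 5000, 8443, 32443) on an ephemeral host port bound to 127.0.0.1, which is why the later provisioning steps look the SSH port up before connecting (34601 in this run). A quick way to see the mapping, assuming the docker CLI on the host:

    docker port no-preload-737478 22/tcp
    # or, with the same template the log uses:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-737478
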
	I1217 11:53:25.647342 1938284 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Running}}
	I1217 11:53:25.671481 1938284 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:53:25.692602 1938284 cli_runner.go:164] Run: docker exec no-preload-737478 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:53:25.747717 1938284 oci.go:144] the created container "no-preload-737478" has a running status.
	I1217 11:53:25.747748 1938284 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa...
	I1217 11:53:25.864505 1938284 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:53:25.902194 1938284 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:53:25.925050 1938284 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:53:25.925074 1938284 kic_runner.go:114] Args: [docker exec --privileged no-preload-737478 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:53:26.006700 1938284 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:53:26.042764 1938284 machine.go:94] provisionDockerMachine start ...
	I1217 11:53:26.042877 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:26.070085 1938284 main.go:143] libmachine: Using SSH client type: native
	I1217 11:53:26.070818 1938284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1217 11:53:26.070846 1938284 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:53:26.153652 1938284 cache.go:157] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:53:26.153689 1938284 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.501340984s
	I1217 11:53:26.153713 1938284 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:53:26.202366 1938284 cache.go:157] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:53:26.202400 1938284 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.550021331s
	I1217 11:53:26.202418 1938284 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:53:26.210074 1938284 cache.go:157] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:53:26.210100 1938284 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.557708538s
	I1217 11:53:26.210113 1938284 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:53:26.229582 1938284 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-737478
	
	I1217 11:53:26.229615 1938284 ubuntu.go:182] provisioning hostname "no-preload-737478"
	I1217 11:53:26.229681 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:26.263324 1938284 main.go:143] libmachine: Using SSH client type: native
	I1217 11:53:26.263636 1938284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1217 11:53:26.263652 1938284 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-737478 && echo "no-preload-737478" | sudo tee /etc/hostname
	I1217 11:53:26.291959 1938284 cache.go:157] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:53:26.291994 1938284 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.639521862s
	I1217 11:53:26.292015 1938284 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:53:26.314768 1938284 cache.go:157] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:53:26.314796 1938284 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.662214785s
	I1217 11:53:26.314812 1938284 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:53:26.314832 1938284 cache.go:87] Successfully saved all images to host disk.
	I1217 11:53:26.409682 1938284 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-737478
	
	I1217 11:53:26.409766 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:26.430696 1938284 main.go:143] libmachine: Using SSH client type: native
	I1217 11:53:26.430974 1938284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1217 11:53:26.430994 1938284 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-737478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-737478/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-737478' | sudo tee -a /etc/hosts; 
				fi
			fi
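
The snippet above only touches /etc/hosts when no line already ends in the node's hostname, so re-running it is harmless. A quick check after provisioning, assuming SSH access to the node:

    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts    # expected: 127.0.1.1 no-preload-737478
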
	I1217 11:53:26.559322 1938284 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:53:26.559354 1938284 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:53:26.559426 1938284 ubuntu.go:190] setting up certificates
	I1217 11:53:26.559446 1938284 provision.go:84] configureAuth start
	I1217 11:53:26.559511 1938284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-737478
	I1217 11:53:26.579939 1938284 provision.go:143] copyHostCerts
	I1217 11:53:26.579999 1938284 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:53:26.580013 1938284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:53:26.580077 1938284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:53:26.580182 1938284 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:53:26.580191 1938284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:53:26.580217 1938284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:53:26.580285 1938284 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:53:26.580292 1938284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:53:26.580315 1938284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:53:26.580389 1938284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.no-preload-737478 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-737478]
	I1217 11:53:26.627066 1938284 provision.go:177] copyRemoteCerts
	I1217 11:53:26.627148 1938284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:53:26.627212 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:26.646475 1938284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:53:26.740940 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:53:26.763886 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:53:26.784974 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:53:26.805662 1938284 provision.go:87] duration metric: took 246.19843ms to configureAuth
	I1217 11:53:26.805695 1938284 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:53:26.805898 1938284 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:53:26.806025 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:26.825327 1938284 main.go:143] libmachine: Using SSH client type: native
	I1217 11:53:26.825671 1938284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1217 11:53:26.825696 1938284 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:53:27.113351 1938284 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:53:27.113382 1938284 machine.go:97] duration metric: took 1.070594371s to provisionDockerMachine
	I1217 11:53:27.113396 1938284 client.go:176] duration metric: took 2.433018529s to LocalClient.Create
	I1217 11:53:27.113423 1938284 start.go:167] duration metric: took 2.433094971s to libmachine.API.Create "no-preload-737478"
	I1217 11:53:27.113438 1938284 start.go:293] postStartSetup for "no-preload-737478" (driver="docker")
	I1217 11:53:27.113454 1938284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:53:27.113517 1938284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:53:27.113592 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:27.132585 1938284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:53:27.235002 1938284 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:53:27.239305 1938284 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:53:27.239346 1938284 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:53:27.239361 1938284 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:53:27.239432 1938284 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:53:27.239549 1938284 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:53:27.239693 1938284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:53:27.248225 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:53:27.271344 1938284 start.go:296] duration metric: took 157.88099ms for postStartSetup
	I1217 11:53:27.271771 1938284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-737478
	I1217 11:53:27.291581 1938284 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:53:27.291887 1938284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:53:27.291934 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:27.312806 1938284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:53:27.407596 1938284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:53:27.412774 1938284 start.go:128] duration metric: took 2.734993704s to createHost
	I1217 11:53:27.412803 1938284 start.go:83] releasing machines lock for "no-preload-737478", held for 2.735140441s
	I1217 11:53:27.412926 1938284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-737478
	I1217 11:53:27.435470 1938284 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:53:27.435568 1938284 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:53:27.435591 1938284 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:53:27.435634 1938284 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:53:27.435673 1938284 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:53:27.435708 1938284 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:53:27.435770 1938284 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:53:27.435867 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:53:27.435927 1938284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:53:27.458382 1938284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:53:27.576854 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:53:27.599738 1938284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:53:27.622730 1938284 ssh_runner.go:195] Run: openssl version
	I1217 11:53:27.630253 1938284 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:53:27.639715 1938284 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:53:27.649568 1938284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:53:27.655247 1938284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:53:27.655319 1938284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:53:27.700227 1938284 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:53:27.709188 1938284 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	I1217 11:53:27.718515 1938284 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:53:27.728810 1938284 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:53:27.737403 1938284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:53:27.742417 1938284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:53:27.742484 1938284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:53:27.787938 1938284 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:53:27.797995 1938284 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:53:27.807669 1938284 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:53:27.816022 1938284 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:53:27.824225 1938284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:53:27.828807 1938284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:53:27.828865 1938284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:53:27.865956 1938284 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:53:27.874703 1938284 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
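
Each custom certificate is installed twice: once under its own name in /etc/ssl/certs and once under its OpenSSL subject hash with a .0 suffix (3ec20f2e.0, b5213941.0, 51391683.0 above), which is the directory layout OpenSSL uses for CA lookups. A minimal sketch of that pattern for one cert, assuming openssl is available on the node:

    CERT=/usr/share/ca-certificates/16729412.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" /etc/ssl/certs/$(basename "$CERT")
    sudo ln -fs /etc/ssl/certs/$(basename "$CERT") /etc/ssl/certs/"$HASH".0
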
	I1217 11:53:27.882825 1938284 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:53:27.886704 1938284 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 11:53:27.890491 1938284 ssh_runner.go:195] Run: cat /version.json
	I1217 11:53:27.890582 1938284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:53:27.894478 1938284 ssh_runner.go:195] Run: systemctl --version
	I1217 11:53:27.959123 1938284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:53:28.000101 1938284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:53:28.005162 1938284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:53:28.005223 1938284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:53:28.038445 1938284 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:53:28.038471 1938284 start.go:496] detecting cgroup driver to use...
	I1217 11:53:28.038508 1938284 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:53:28.038602 1938284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:53:28.058668 1938284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:53:28.072180 1938284 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:53:28.072232 1938284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:53:28.092633 1938284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:53:28.112264 1938284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:53:28.220825 1938284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:53:28.322777 1938284 docker.go:234] disabling docker service ...
	I1217 11:53:28.322854 1938284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:53:28.343186 1938284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:53:28.358091 1938284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:53:28.445102 1938284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:53:28.540375 1938284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:53:28.554263 1938284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:53:28.569562 1938284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:53:28.569632 1938284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.581000 1938284 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:53:28.581082 1938284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.590666 1938284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.600754 1938284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.610521 1938284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:53:28.619124 1938284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.628838 1938284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.644245 1938284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:53:28.653799 1938284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:53:28.662037 1938284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:53:28.669873 1938284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:53:28.750764 1938284 ssh_runner.go:195] Run: sudo systemctl restart crio
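
The sed sequence above edits CRI-O's drop-in file rather than the main crio.conf: it pins the pause image, switches the cgroup manager to systemd, puts conmon in the pod cgroup, and opens unprivileged ports through default_sysctls, then restarts the service. A quick way to confirm the result on the node (a sketch; the drop-in path matches the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
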
	I1217 11:53:29.016744 1938284 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:53:29.016825 1938284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:53:29.021774 1938284 start.go:564] Will wait 60s for crictl version
	I1217 11:53:29.021828 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.025919 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:53:29.052850 1938284 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:53:29.052937 1938284 ssh_runner.go:195] Run: crio --version
	I1217 11:53:29.085657 1938284 ssh_runner.go:195] Run: crio --version
	I1217 11:53:29.119423 1938284 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 11:53:29.120820 1938284 cli_runner.go:164] Run: docker network inspect no-preload-737478 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:53:29.143574 1938284 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 11:53:29.147687 1938284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:53:29.158883 1938284 kubeadm.go:884] updating cluster {Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:53:29.158994 1938284 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:53:29.159043 1938284 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:53:29.188160 1938284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 11:53:29.188191 1938284 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 11:53:29.188248 1938284 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:53:29.188277 1938284 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:29.188510 1938284 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.188521 1938284 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:29.188550 1938284 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:29.188594 1938284 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.188509 1938284 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.188521 1938284 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 11:53:29.189589 1938284 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:29.189611 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:29.189638 1938284 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.189649 1938284 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 11:53:29.189901 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.190001 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:29.190051 1938284 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.190154 1938284 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
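
The "daemon lookup" warnings only mean that none of these images are present in the host's Docker daemon either; since no preload tarball exists for v1.35.0-rc.1, minikube falls back to the per-image tarballs it cached earlier and will transfer them into the node's runtime, removing mismatched copies first (the crictl rmi calls below). Checking what the node's CRI-O already has, a sketch run inside the node:

    sudo crictl images
    # or machine-readable:
    sudo crictl images --output json
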
	I1217 11:53:29.289589 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.292352 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.307474 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.317166 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:29.333025 1938284 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1217 11:53:29.333061 1938284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.333112 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.333264 1938284 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1217 11:53:29.333431 1938284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.333468 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.350239 1938284 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1217 11:53:29.350288 1938284 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.350371 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.358456 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:29.363069 1938284 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 11:53:29.363126 1938284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:29.363138 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.363171 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.363207 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.363257 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.374806 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 11:53:29.379283 1938284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:29.416827 1938284 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1217 11:53:29.416886 1938284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:29.416953 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.417101 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.417163 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.417212 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.417240 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:29.429024 1938284 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 11:53:29.429088 1938284 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 11:53:29.429160 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.432230 1938284 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1217 11:53:29.432281 1938284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:53:29.432329 1938284 ssh_runner.go:195] Run: which crictl
	I1217 11:53:29.451523 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:53:29.451714 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:53:29.454879 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:53:29.458435 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:53:29.458451 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:53:29.458518 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:53:29.458626 1938284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	
	
	==> CRI-O <==
	Dec 17 11:52:52 old-k8s-version-401285 crio[605]: time="2025-12-17T11:52:52.036971708Z" level=info msg="Created container 7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2/kubernetes-dashboard" id=48a9223e-2e5a-499f-a9c1-a25ef780cb58 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:52:52 old-k8s-version-401285 crio[605]: time="2025-12-17T11:52:52.037597543Z" level=info msg="Starting container: 7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e" id=756f9031-743c-4453-bf87-a625bfbb36e3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:52:52 old-k8s-version-401285 crio[605]: time="2025-12-17T11:52:52.039590843Z" level=info msg="Started container" PID=1785 containerID=7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2/kubernetes-dashboard id=756f9031-743c-4453-bf87-a625bfbb36e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=834a65035c3c169426fda832b31529d7d07a98e966ca35e4681456a4a6f6364c
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.667215882Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e9b5011e-ce26-43e5-a1a4-246ccedeef7e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.668232244Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0086016b-b336-4d75-a05f-ba6caa77e95a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.669300604Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=17d41051-3c53-483c-8b2d-bc0fd02d0091 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.669471053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.67436617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.67459191Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e7bdeddb948a757bad49f33a46ff4e8d01624111cf740eff930a93610ad78b13/merged/etc/passwd: no such file or directory"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.6746256Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e7bdeddb948a757bad49f33a46ff4e8d01624111cf740eff930a93610ad78b13/merged/etc/group: no such file or directory"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.674920278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.706194503Z" level=info msg="Created container 8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60: kube-system/storage-provisioner/storage-provisioner" id=17d41051-3c53-483c-8b2d-bc0fd02d0091 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.706903923Z" level=info msg="Starting container: 8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60" id=ac6fc2c5-d0bf-41b8-a11f-e0741f67fbc6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:53:04 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:04.708782676Z" level=info msg="Started container" PID=1810 containerID=8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60 description=kube-system/storage-provisioner/storage-provisioner id=ac6fc2c5-d0bf-41b8-a11f-e0741f67fbc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=61cdf9ad55c16f6f387cd3de8421128eb1af073de5d5fd84ed53940ba879a4bf
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.54851424Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3b522475-a243-4f4d-bb7b-45693f529cea name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.549666405Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6243b904-6d43-488a-bd62-848390af1815 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.550800908Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper" id=30a5364a-6a4c-4574-b49e-51bc0a46d7a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.550942821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.557063135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.557735585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.586709903Z" level=info msg="Created container 792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper" id=30a5364a-6a4c-4574-b49e-51bc0a46d7a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.587576335Z" level=info msg="Starting container: 792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b" id=65f449af-3c75-4639-9cc8-e5cdce780cf1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.589515902Z" level=info msg="Started container" PID=1826 containerID=792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper id=65f449af-3c75-4639-9cc8-e5cdce780cf1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18a0f40d9f728208fdef42d46f74ecad264ec9717388155d2d65c78abaca993f
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.689165465Z" level=info msg="Removing container: 531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7" id=22f5e297-0c2f-4ed4-936c-b3cf12f4c3ef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:53:11 old-k8s-version-401285 crio[605]: time="2025-12-17T11:53:11.700408829Z" level=info msg="Removed container 531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v/dashboard-metrics-scraper" id=22f5e297-0c2f-4ed4-936c-b3cf12f4c3ef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	792abf94632c8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   18a0f40d9f728       dashboard-metrics-scraper-5f989dc9cf-prh7v       kubernetes-dashboard
	8f100212ae2fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   61cdf9ad55c16       storage-provisioner                              kube-system
	7e9d90fea5777       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   834a65035c3c1       kubernetes-dashboard-8694d4445c-klmw2            kubernetes-dashboard
	ffd23d6beaaa5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago      Running             busybox                     1                   80a76c3dda141       busybox                                          default
	58dc04f8562f4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago      Running             coredns                     0                   122c75942cc9a       coredns-5dd5756b68-nkbwq                         kube-system
	70af8698b9024       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago      Running             kindnet-cni                 0                   9b4695112bb31       kindnet-dmn7l                                    kube-system
	cde52efb57536       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago      Exited              storage-provisioner         0                   61cdf9ad55c16       storage-provisioner                              kube-system
	3a799da8c5774       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago      Running             kube-proxy                  0                   5966e2cde48f8       kube-proxy-5867r                                 kube-system
	2f758407de6a5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   174d4da11964a       kube-controller-manager-old-k8s-version-401285   kube-system
	9aa49d40045e1       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   3913c4020003f       kube-scheduler-old-k8s-version-401285            kube-system
	149391f8debc5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   d5179f4a807d2       etcd-old-k8s-version-401285                      kube-system
	8f15dd64ca827       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   8ebf8c50641f4       kube-apiserver-old-k8s-version-401285            kube-system
	
	
	==> coredns [58dc04f8562f4834da267a4a4e1e01fea4f3965d999f69cb7a337c022308ca4a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57122 - 50041 "HINFO IN 6731898185667097896.3061840381048538122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028662591s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-401285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-401285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=old-k8s-version-401285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_51_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:51:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-401285
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:53:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:53:04 +0000   Wed, 17 Dec 2025 11:51:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-401285
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                025c44d6-1ab3-4126-8994-078d0fca59b0
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-nkbwq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-401285                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-dmn7l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-401285             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-401285    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-5867r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-401285             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-prh7v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-klmw2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-401285 event: Registered Node old-k8s-version-401285 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-401285 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x9 over 61s)      kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-401285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 61s)      kubelet          Node old-k8s-version-401285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-401285 event: Registered Node old-k8s-version-401285 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [149391f8debc5ff3a0624fc6350eb74473e08e63dc1f13dba71547b6cbc7f5ca] <==
	{"level":"info","ts":"2025-12-17T11:52:31.132935Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T11:52:31.132946Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T11:52:31.13306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-17T11:52:31.133154Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-17T11:52:31.133305Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:52:31.133349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T11:52:31.135686Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T11:52:31.136258Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:52:31.136309Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:52:31.137067Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T11:52:31.137123Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T11:52:32.322876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T11:52:32.322926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:52:32.322947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T11:52:32.322962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.32297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.32298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.322989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T11:52:32.324157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:52:32.32417Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-401285 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:52:32.324182Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:52:32.32444Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:52:32.32449Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:52:32.32536Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-17T11:52:32.325788Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:53:31 up  5:35,  0 user,  load average: 2.19, 2.48, 1.86
	Linux old-k8s-version-401285 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70af8698b90248c8d07b9effdf600c38738c38914120182e85c36977d7916bf2] <==
	I1217 11:52:34.120054       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:52:34.120306       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:52:34.120485       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:52:34.120506       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:52:34.120560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:52:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:52:34.418061       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:52:34.418108       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:52:34.418121       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:52:34.516930       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:52:34.804748       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:52:34.804782       1 metrics.go:72] Registering metrics
	I1217 11:52:34.804854       1 controller.go:711] "Syncing nftables rules"
	I1217 11:52:44.417821       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:52:44.417895       1 main.go:301] handling current node
	I1217 11:52:54.418671       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:52:54.418725       1 main.go:301] handling current node
	I1217 11:53:04.418472       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:53:04.418518       1 main.go:301] handling current node
	I1217 11:53:14.419435       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:53:14.419477       1 main.go:301] handling current node
	I1217 11:53:24.425319       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:53:24.425361       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f15dd64ca827bde9a31635cadaed80039200397a5a70c03ee468cf1952c4c87] <==
	I1217 11:52:33.410299       1 aggregator.go:166] initial CRD sync complete...
	I1217 11:52:33.410313       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 11:52:33.410334       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:52:33.410342       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:52:33.410443       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1217 11:52:33.410585       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1217 11:52:33.413079       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 11:52:33.447136       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 11:52:34.239601       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 11:52:34.272818       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 11:52:34.290150       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:52:34.297186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:52:34.305134       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 11:52:34.313333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:52:34.346099       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.14.142"}
	I1217 11:52:34.370492       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.122.9"}
	E1217 11:52:43.411327       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1217 11:52:45.493543       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:52:45.559123       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 11:52:45.559123       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 11:52:45.647936       1 controller.go:624] quota admission added evaluator for: endpoints
	E1217 11:52:53.411953       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1217 11:53:03.412935       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1217 11:53:13.413388       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1217 11:53:23.414206       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [2f758407de6a5197364df61528ddb100122e25164fa97424a91e1cfbf63d5b32] <==
	I1217 11:52:45.755582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="194.049989ms"
	I1217 11:52:45.755772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.566µs"
	I1217 11:52:45.756906       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-prh7v"
	I1217 11:52:45.756936       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-klmw2"
	I1217 11:52:45.764595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="201.466345ms"
	I1217 11:52:45.764876       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 11:52:45.765128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="203.523531ms"
	I1217 11:52:45.772380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.154507ms"
	I1217 11:52:45.772427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.786186ms"
	I1217 11:52:45.772597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.575µs"
	I1217 11:52:45.772616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.415µs"
	I1217 11:52:45.774217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.669µs"
	I1217 11:52:45.783840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.211µs"
	I1217 11:52:46.084856       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 11:52:46.163243       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 11:52:46.163280       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 11:52:48.629934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.836µs"
	I1217 11:52:49.639318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="196.038µs"
	I1217 11:52:50.641683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.906µs"
	I1217 11:52:52.650834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.613501ms"
	I1217 11:52:52.650954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.062µs"
	I1217 11:53:11.700689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.325µs"
	I1217 11:53:12.052301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.403072ms"
	I1217 11:53:12.052410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.153µs"
	I1217 11:53:16.076150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.461µs"
	
	
	==> kube-proxy [3a799da8c577475c5da3a3846bd74a1474d4f4d9552c749aa00155b4a2b65fd9] <==
	I1217 11:52:33.964455       1 server_others.go:69] "Using iptables proxy"
	I1217 11:52:33.975846       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1217 11:52:33.995979       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:52:33.999208       1 server_others.go:152] "Using iptables Proxier"
	I1217 11:52:33.999247       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 11:52:33.999253       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 11:52:33.999284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 11:52:33.999478       1 server.go:846] "Version info" version="v1.28.0"
	I1217 11:52:33.999518       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:52:34.000230       1 config.go:188] "Starting service config controller"
	I1217 11:52:34.000249       1 config.go:97] "Starting endpoint slice config controller"
	I1217 11:52:34.000266       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 11:52:34.000296       1 config.go:315] "Starting node config controller"
	I1217 11:52:34.000335       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 11:52:34.000266       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 11:52:34.101240       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 11:52:34.101271       1 shared_informer.go:318] Caches are synced for node config
	I1217 11:52:34.101303       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [9aa49d40045e1e67467ef562959460e7790cb28aff33d0ead73eb299efd0348c] <==
	I1217 11:52:31.633777       1 serving.go:348] Generated self-signed cert in-memory
	I1217 11:52:33.378013       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 11:52:33.378035       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:52:33.381756       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1217 11:52:33.381781       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1217 11:52:33.381785       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:52:33.381817       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 11:52:33.381843       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 11:52:33.381865       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1217 11:52:33.382692       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 11:52:33.382790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 11:52:33.482229       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1217 11:52:33.482276       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1217 11:52:33.482268       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817825     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2bx\" (UniqueName: \"kubernetes.io/projected/ad3e2463-5388-453d-8fe6-25428420edfd-kube-api-access-nn2bx\") pod \"kubernetes-dashboard-8694d4445c-klmw2\" (UID: \"ad3e2463-5388-453d-8fe6-25428420edfd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2"
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817873     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ad3e2463-5388-453d-8fe6-25428420edfd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-klmw2\" (UID: \"ad3e2463-5388-453d-8fe6-25428420edfd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2"
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817896     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxhq\" (UniqueName: \"kubernetes.io/projected/f99f24f7-0927-4395-9cfd-e0b94f087da2-kube-api-access-7zxhq\") pod \"dashboard-metrics-scraper-5f989dc9cf-prh7v\" (UID: \"f99f24f7-0927-4395-9cfd-e0b94f087da2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v"
	Dec 17 11:52:45 old-k8s-version-401285 kubelet[772]: I1217 11:52:45.817977     772 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f99f24f7-0927-4395-9cfd-e0b94f087da2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-prh7v\" (UID: \"f99f24f7-0927-4395-9cfd-e0b94f087da2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v"
	Dec 17 11:52:48 old-k8s-version-401285 kubelet[772]: I1217 11:52:48.618585     772 scope.go:117] "RemoveContainer" containerID="5ab96aa43f9a2fe730bebb3eef8edf1be0219da2566c0c234e28edaaf5721925"
	Dec 17 11:52:49 old-k8s-version-401285 kubelet[772]: I1217 11:52:49.624202     772 scope.go:117] "RemoveContainer" containerID="5ab96aa43f9a2fe730bebb3eef8edf1be0219da2566c0c234e28edaaf5721925"
	Dec 17 11:52:49 old-k8s-version-401285 kubelet[772]: I1217 11:52:49.624582     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:52:49 old-k8s-version-401285 kubelet[772]: E1217 11:52:49.625158     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:52:50 old-k8s-version-401285 kubelet[772]: I1217 11:52:50.628607     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:52:50 old-k8s-version-401285 kubelet[772]: E1217 11:52:50.628951     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:52:52 old-k8s-version-401285 kubelet[772]: I1217 11:52:52.645318     772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-klmw2" podStartSLOduration=1.731988817 podCreationTimestamp="2025-12-17 11:52:45 +0000 UTC" firstStartedPulling="2025-12-17 11:52:46.088793227 +0000 UTC m=+15.640512990" lastFinishedPulling="2025-12-17 11:52:52.002045828 +0000 UTC m=+21.553765589" observedRunningTime="2025-12-17 11:52:52.644869294 +0000 UTC m=+22.196589064" watchObservedRunningTime="2025-12-17 11:52:52.645241416 +0000 UTC m=+22.196961186"
	Dec 17 11:52:56 old-k8s-version-401285 kubelet[772]: I1217 11:52:56.066585     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:52:56 old-k8s-version-401285 kubelet[772]: E1217 11:52:56.066860     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:53:04 old-k8s-version-401285 kubelet[772]: I1217 11:53:04.666758     772 scope.go:117] "RemoveContainer" containerID="cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: I1217 11:53:11.547670     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: I1217 11:53:11.687932     772 scope.go:117] "RemoveContainer" containerID="531751352f5ddd8527830fa2d01f600c39ced65b8f94f9c8a952a882bf6a70f7"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: I1217 11:53:11.688149     772 scope.go:117] "RemoveContainer" containerID="792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	Dec 17 11:53:11 old-k8s-version-401285 kubelet[772]: E1217 11:53:11.688522     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:53:16 old-k8s-version-401285 kubelet[772]: I1217 11:53:16.066313     772 scope.go:117] "RemoveContainer" containerID="792abf94632c8988c1d50fdea60cd67561287eb6ade2b684c0635879b204ad3b"
	Dec 17 11:53:16 old-k8s-version-401285 kubelet[772]: E1217 11:53:16.066787     772 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-prh7v_kubernetes-dashboard(f99f24f7-0927-4395-9cfd-e0b94f087da2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-prh7v" podUID="f99f24f7-0927-4395-9cfd-e0b94f087da2"
	Dec 17 11:53:26 old-k8s-version-401285 kubelet[772]: I1217 11:53:26.162339     772 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:26 old-k8s-version-401285 systemd[1]: kubelet.service: Consumed 1.614s CPU time.
	
	
	==> kubernetes-dashboard [7e9d90fea57778802550e21a5890123fba110520ad22bb1729e518d6eff4b78e] <==
	2025/12/17 11:52:52 Starting overwatch
	2025/12/17 11:52:52 Using namespace: kubernetes-dashboard
	2025/12/17 11:52:52 Using in-cluster config to connect to apiserver
	2025/12/17 11:52:52 Using secret token for csrf signing
	2025/12/17 11:52:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:52:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:52:52 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 11:52:52 Generating JWE encryption key
	2025/12/17 11:52:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:52:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:52:52 Initializing JWE encryption key from synchronized object
	2025/12/17 11:52:52 Creating in-cluster Sidecar client
	2025/12/17 11:52:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:52:52 Serving insecurely on HTTP port: 9090
	2025/12/17 11:53:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8f100212ae2fab422e64ea40d8888d1a77a118495fe5ca25767cbcca9e72fc60] <==
	I1217 11:53:04.720755       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:53:04.729245       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:53:04.729289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 11:53:22.123260       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:53:22.123326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca339107-09fc-488a-9de1-8033a0f945ef", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-401285_d566a4c7-e57e-4936-899e-6e544d6682fb became leader
	I1217 11:53:22.123409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-401285_d566a4c7-e57e-4936-899e-6e544d6682fb!
	I1217 11:53:22.223644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-401285_d566a4c7-e57e-4936-899e-6e544d6682fb!
	
	
	==> storage-provisioner [cde52efb575362eca44dd3923d8c68b38b0d426bac72946a99c7c44ff4812dcb] <==
	I1217 11:52:33.924395       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:53:03.927909       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-401285 -n old-k8s-version-401285
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-401285 -n old-k8s-version-401285: exit status 2 (348.381527ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-401285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.40s)
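One way to chase a failure like this outside CI (a sketch only; the minikube-specific test flags for driver and container-runtime selection vary by environment and are omitted here) is to re-run just the failing subtest from the minikube repo root with the standard go test run filter:

	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/Pause' -timeout 30m -v

and then collect the same post-mortem output shown above with out/minikube-linux-amd64 logs -p old-k8s-version-401285.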

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (272.095714ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
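The MK_ADDON_ENABLE_PAUSED exit above comes from the pre-flight "is the cluster paused" check: before enabling the addon, minikube lists containers on the node, and here the runc invocation fails because /run/runc does not exist on this crio-based node. With the docker driver the node is the container named after the profile (see the docker inspect output below), so the symptom can be confirmed by hand; this is a sketch for manual verification, not part of the recorded test run:

	docker exec no-preload-737478 sudo runc list -f json
	docker exec no-preload-737478 ls -d /run/runc

Both commands are expected to fail with the same "/run/runc: no such file or directory" style of error captured in the stderr block above.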
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-737478 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-737478 describe deploy/metrics-server -n kube-system: exit status 1 (65.439793ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-737478 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
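The assertion at start_stop_delete_test.go:219 checks that the metrics-server deployment references the overridden registry ("fake.domain/registry.k8s.io/echoserver:1.4"); because the enable command exited before the addon was applied, the deployment is missing (NotFound above) and the check fails on empty output. On a run where the addon does get enabled, the deployed image can be inspected directly with a jsonpath query (shown here as a sketch, not something the test itself runs):

	kubectl --context no-preload-737478 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'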
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-737478
helpers_test.go:244: (dbg) docker inspect no-preload-737478:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87",
	        "Created": "2025-12-17T11:53:25.367483082Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1938848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:53:25.412896224Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/hosts",
	        "LogPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87-json.log",
	        "Name": "/no-preload-737478",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-737478:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-737478",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87",
	                "LowerDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-737478",
	                "Source": "/var/lib/docker/volumes/no-preload-737478/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-737478",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-737478",
	                "name.minikube.sigs.k8s.io": "no-preload-737478",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4e0386e3ea460656745270903645954a156574712061a35800916ae488581851",
	            "SandboxKey": "/var/run/docker/netns/4e0386e3ea46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34601"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34602"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34605"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34603"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34604"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-737478": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c30ab0942ebedfa9daed9e159e1243b5098ef936ff9c2403568c9e33b8451ef1",
	                    "EndpointID": "81c0bcf20d96fe236ea85fa0945bddc8147f62f5a8ef33dcee2100a4c2b721eb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "96:45:f7:8f:f2:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-737478",
	                        "7dea84a847e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
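When only a single field is needed from this dump, the same information can be pulled with a Go template instead of the full inspect output; for example, the forwarded SSH port (34601 in the Ports section above) could be read as shown below. This is a hypothetical one-liner, equivalent to the templated inspect calls that appear later in these logs:

	# hypothetical single-field query; not part of the recorded run
	docker container inspect no-preload-737478 \
		--format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'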
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-737478 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-737478 logs -n 25: (1.290983642s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p cert-options-714247 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                          │ cert-options-714247          │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ cert-options-714247 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                        │ cert-options-714247          │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ ssh     │ -p cert-options-714247 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                      │ cert-options-714247          │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ delete  │ -p cert-options-714247                                                                                                                                                                                                                             │ cert-options-714247          │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │                     │
	│ stop    │ -p old-k8s-version-401285 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-401285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                          │ cert-expiration-067996       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p cert-expiration-067996                                                                                                                                                                                                                          │ cert-expiration-067996       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:54:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:54:03.244695 1952673 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:03.244946 1952673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:03.244954 1952673 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:03.244959 1952673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:03.245146 1952673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:03.245673 1952673 out.go:368] Setting JSON to false
	I1217 11:54:03.246939 1952673 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20188,"bootTime":1765952255,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:54:03.247003 1952673 start.go:143] virtualization: kvm guest
	I1217 11:54:03.249619 1952673 out.go:179] * [newest-cni-601829] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:54:03.250900 1952673 notify.go:221] Checking for updates...
	I1217 11:54:03.250925 1952673 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:54:03.252237 1952673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:54:03.254135 1952673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:03.257960 1952673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:54:03.259282 1952673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:54:03.260497 1952673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:54:03.262213 1952673 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:03.262371 1952673 config.go:182] Loaded profile config "embed-certs-542273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:03.262462 1952673 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:03.262584 1952673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:54:03.288960 1952673 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:54:03.289072 1952673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:03.350699 1952673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 11:54:03.339679509 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:03.350861 1952673 docker.go:319] overlay module found
	I1217 11:54:03.352886 1952673 out.go:179] * Using the docker driver based on user configuration
	I1217 11:54:03.354255 1952673 start.go:309] selected driver: docker
	I1217 11:54:03.354272 1952673 start.go:927] validating driver "docker" against <nil>
	I1217 11:54:03.354284 1952673 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:54:03.354866 1952673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:03.418358 1952673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 11:54:03.407378494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:03.418793 1952673 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:54:03.418858 1952673 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:54:03.419157 1952673 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:03.423693 1952673 out.go:179] * Using Docker driver with root privileges
	I1217 11:54:03.425202 1952673 cni.go:84] Creating CNI manager for ""
	I1217 11:54:03.425298 1952673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:03.425311 1952673 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:54:03.425416 1952673 start.go:353] cluster config:
	{Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:03.427017 1952673 out.go:179] * Starting "newest-cni-601829" primary control-plane node in "newest-cni-601829" cluster
	I1217 11:54:03.428206 1952673 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:54:03.429483 1952673 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:54:03.430934 1952673 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:03.430977 1952673 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:54:03.431000 1952673 cache.go:65] Caching tarball of preloaded images
	I1217 11:54:03.431042 1952673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:54:03.431110 1952673 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:54:03.431123 1952673 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 11:54:03.431235 1952673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json ...
	I1217 11:54:03.431266 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json: {Name:mk2d370f2ff2347a1af47e8ce66acf5877fe4672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:03.456193 1952673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:54:03.456244 1952673 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:54:03.456274 1952673 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:54:03.456318 1952673 start.go:360] acquireMachinesLock for newest-cni-601829: {Name:mk9faceab19a04d2aa54df7eaada9c8c27536be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:03.456450 1952673 start.go:364] duration metric: took 104.148µs to acquireMachinesLock for "newest-cni-601829"
	I1217 11:54:03.456487 1952673 start.go:93] Provisioning new machine with config: &{Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:03.456595 1952673 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:54:02.206623 1943967 out.go:252]   - Configuring RBAC rules ...
	I1217 11:54:02.206808 1943967 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:54:02.210930 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:54:02.218082 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:54:02.223874 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:54:02.227076 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:54:02.230464 1943967 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:54:02.567052 1943967 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:54:02.983242 1943967 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:54:03.563597 1943967 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:54:03.564512 1943967 kubeadm.go:319] 
	I1217 11:54:03.564612 1943967 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:54:03.564625 1943967 kubeadm.go:319] 
	I1217 11:54:03.564718 1943967 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:54:03.564728 1943967 kubeadm.go:319] 
	I1217 11:54:03.564758 1943967 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:54:03.564856 1943967 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:54:03.564968 1943967 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:54:03.564990 1943967 kubeadm.go:319] 
	I1217 11:54:03.565073 1943967 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:54:03.565083 1943967 kubeadm.go:319] 
	I1217 11:54:03.565148 1943967 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:54:03.565159 1943967 kubeadm.go:319] 
	I1217 11:54:03.565224 1943967 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:54:03.565327 1943967 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:54:03.565427 1943967 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:54:03.565440 1943967 kubeadm.go:319] 
	I1217 11:54:03.565574 1943967 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:54:03.565690 1943967 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:54:03.565700 1943967 kubeadm.go:319] 
	I1217 11:54:03.565827 1943967 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wvm0wt.yk7k376wwexjgpk5 \
	I1217 11:54:03.566018 1943967 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:54:03.566049 1943967 kubeadm.go:319] 	--control-plane 
	I1217 11:54:03.566053 1943967 kubeadm.go:319] 
	I1217 11:54:03.566126 1943967 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:54:03.566133 1943967 kubeadm.go:319] 
	I1217 11:54:03.566203 1943967 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wvm0wt.yk7k376wwexjgpk5 \
	I1217 11:54:03.566293 1943967 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:54:03.569103 1943967 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:54:03.569289 1943967 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:54:03.569323 1943967 cni.go:84] Creating CNI manager for ""
	I1217 11:54:03.569334 1943967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:03.571641 1943967 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 11:54:01.173599 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	W1217 11:54:03.674139 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	I1217 11:54:00.493039 1949672 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-382022:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.420245328s)
	I1217 11:54:00.494073 1949672 kic.go:203] duration metric: took 4.421432015s to extract preloaded images to volume ...
	W1217 11:54:00.494339 1949672 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:54:00.494470 1949672 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:54:00.494569 1949672 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:54:00.587308 1949672 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-382022 --name default-k8s-diff-port-382022 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-382022 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-382022 --network default-k8s-diff-port-382022 --ip 192.168.76.2 --volume default-k8s-diff-port-382022:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:54:01.174656 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Running}}
	I1217 11:54:01.193864 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:01.213087 1949672 cli_runner.go:164] Run: docker exec default-k8s-diff-port-382022 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:54:01.262496 1949672 oci.go:144] the created container "default-k8s-diff-port-382022" has a running status.
	I1217 11:54:01.262578 1949672 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa...
	I1217 11:54:01.400315 1949672 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:54:01.438617 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:01.461515 1949672 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:54:01.461569 1949672 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-382022 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:54:01.528311 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:01.551286 1949672 machine.go:94] provisionDockerMachine start ...
	I1217 11:54:01.551394 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:01.576202 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:01.576504 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:01.576520 1949672 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:54:01.719918 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:54:01.719956 1949672 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-382022"
	I1217 11:54:01.720039 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:01.743427 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:01.743773 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:01.743799 1949672 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382022 && echo "default-k8s-diff-port-382022" | sudo tee /etc/hostname
	I1217 11:54:01.897201 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:54:01.897282 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:01.920720 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:01.921007 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:01.921043 1949672 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:54:02.059020 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:54:02.059057 1949672 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:54:02.059085 1949672 ubuntu.go:190] setting up certificates
	I1217 11:54:02.059102 1949672 provision.go:84] configureAuth start
	I1217 11:54:02.059189 1949672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:54:02.082451 1949672 provision.go:143] copyHostCerts
	I1217 11:54:02.082521 1949672 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:54:02.082563 1949672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:54:02.082635 1949672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:54:02.082759 1949672 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:54:02.082773 1949672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:54:02.082809 1949672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:54:02.082904 1949672 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:54:02.082917 1949672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:54:02.082967 1949672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:54:02.083053 1949672 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382022 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-382022 localhost minikube]
	I1217 11:54:02.119172 1949672 provision.go:177] copyRemoteCerts
	I1217 11:54:02.119240 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:54:02.119304 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.139354 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:02.239050 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 11:54:02.260116 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:54:02.279598 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 11:54:02.298374 1949672 provision.go:87] duration metric: took 239.251571ms to configureAuth
	I1217 11:54:02.298406 1949672 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:54:02.298603 1949672 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:02.298789 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.317959 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:02.318254 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:02.318274 1949672 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:54:02.648494 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:54:02.648524 1949672 machine.go:97] duration metric: took 1.097212077s to provisionDockerMachine
	I1217 11:54:02.648554 1949672 client.go:176] duration metric: took 7.254405796s to LocalClient.Create
	I1217 11:54:02.648579 1949672 start.go:167] duration metric: took 7.254501293s to libmachine.API.Create "default-k8s-diff-port-382022"
	I1217 11:54:02.648590 1949672 start.go:293] postStartSetup for "default-k8s-diff-port-382022" (driver="docker")
	I1217 11:54:02.648607 1949672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:54:02.648682 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:54:02.648736 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.670640 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:02.775360 1949672 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:54:02.780680 1949672 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:54:02.780722 1949672 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:54:02.780738 1949672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:54:02.780805 1949672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:54:02.780899 1949672 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:54:02.781026 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:54:02.792231 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:02.819579 1949672 start.go:296] duration metric: took 170.967603ms for postStartSetup
	I1217 11:54:02.820010 1949672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:54:02.843246 1949672 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:54:02.843608 1949672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:54:02.843697 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.873156 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:02.975092 1949672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:54:02.981423 1949672 start.go:128] duration metric: took 7.589272064s to createHost
	I1217 11:54:02.981457 1949672 start.go:83] releasing machines lock for "default-k8s-diff-port-382022", held for 7.589490305s
	I1217 11:54:02.981561 1949672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:54:03.008307 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:03.008401 1949672 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:03.008424 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:03.008472 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:03.008510 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:03.008563 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:03.008630 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:03.008724 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:03.008784 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:03.030211 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:03.141004 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:03.161797 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:03.181838 1949672 ssh_runner.go:195] Run: openssl version
	I1217 11:54:03.188734 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.197800 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:03.206389 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.210889 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.210964 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.252036 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:03.261872 1949672 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
	I1217 11:54:03.270735 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.280714 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:03.290850 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.295389 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.295458 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.344285 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:03.354092 1949672 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:03.362891 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.371247 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:03.384516 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.389783 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.389862 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.435806 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:03.446736 1949672 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:54:03.456667 1949672 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:54:03.461284 1949672 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
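The sequence above installs each CA into /usr/share/ca-certificates and then creates the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL-based clients use for lookup: `openssl x509 -hash -noout` prints the eight-hex-digit subject hash, and the link points back at the PEM file. A minimal Go sketch of that flow, shelling out to the openssl binary the same way the logged commands do (paths and error handling simplified; this is not minikube's actual certs.go):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA copies a PEM certificate into the shared CA directory and creates
    // the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL clients expect.
    func installCA(pemPath string) error {
        dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(pemPath))
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return err
        }
        if err := os.WriteFile(dst, data, 0644); err != nil {
            return err
        }
        // "openssl x509 -hash -noout" prints the certificate's subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(dst, link)
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: installca <cert.pem>")
            os.Exit(1)
        }
        if err := installCA(os.Args[1]); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

The two trailing runs guarded by `|| true` cover both families of base image: update-ca-certificates regenerates the bundle on Debian-style systems, while update-ca-trust extract is the RPM-style equivalent.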
	I1217 11:54:03.465385 1949672 ssh_runner.go:195] Run: cat /version.json
	I1217 11:54:03.465467 1949672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:54:03.469963 1949672 ssh_runner.go:195] Run: systemctl --version
	I1217 11:54:03.535799 1949672 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:54:03.585558 1949672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:54:03.591717 1949672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:54:03.591801 1949672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:54:03.624994 1949672 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
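Because the docker driver with the crio runtime gets kindnet for pod networking (noted further down in this log), any pre-existing bridge or podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the runtime ignores them but they remain restorable. A rough Go equivalent of the logged find/mv command; the directory and the name patterns come from the log, and this is a sketch rather than minikube's code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs so the runtime ignores them.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        files, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("disabled:", files)
    }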
	I1217 11:54:03.625025 1949672 start.go:496] detecting cgroup driver to use...
	I1217 11:54:03.625064 1949672 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:54:03.625134 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:54:03.647522 1949672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:54:03.666024 1949672 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:54:03.666087 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:54:03.691460 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:54:03.716853 1949672 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:54:03.830163 1949672 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:54:03.965318 1949672 docker.go:234] disabling docker service ...
	I1217 11:54:03.965389 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:54:04.000039 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:54:04.018991 1949672 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:54:04.137333 1949672 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:54:04.241805 1949672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:54:04.257629 1949672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:54:04.274423 1949672 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:54:04.274514 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.288085 1949672 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:54:04.288159 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.300633 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.310695 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.321774 1949672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:54:04.332167 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.342059 1949672 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.359057 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.369920 1949672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:54:04.378871 1949672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:54:04.388126 1949672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:04.479654 1949672 ssh_runner.go:195] Run: sudo systemctl restart crio
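The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a fixed pause image, the systemd cgroup manager, a "pod" conmon cgroup, and a default sysctl that opens unprivileged ports, after which CRI-O is restarted. Rendered as a drop-in, the result would look roughly like the constant below; the values are taken from the log, while the section placement follows CRI-O's documented crio.conf schema and the real drop-in shipped in the kicbase image may arrange its keys differently.

    package main

    import "fmt"

    // crioDropIn is an illustrative rendering of the settings the logged sed
    // commands enforce in /etc/crio/crio.conf.d/02-crio.conf.
    const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

    func main() {
        fmt.Print(crioDropIn)
        fmt.Println("# apply under /etc/crio/crio.conf.d/, then: systemctl daemon-reload && systemctl restart crio")
    }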
	I1217 11:54:05.175060 1949672 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:54:05.175133 1949672 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:54:05.179563 1949672 start.go:564] Will wait 60s for crictl version
	I1217 11:54:05.179632 1949672 ssh_runner.go:195] Run: which crictl
	I1217 11:54:05.183637 1949672 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:54:05.213404 1949672 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:54:05.213500 1949672 ssh_runner.go:195] Run: crio --version
	I1217 11:54:05.245866 1949672 ssh_runner.go:195] Run: crio --version
	I1217 11:54:05.283750 1949672 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 11:54:03.573203 1943967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:54:03.579017 1943967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:54:03.579037 1943967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:54:03.597054 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:54:03.872723 1943967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:54:03.872887 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:03.872980 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-542273 minikube.k8s.io/updated_at=2025_12_17T11_54_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=embed-certs-542273 minikube.k8s.io/primary=true
	I1217 11:54:03.891289 1943967 ops.go:34] apiserver oom_adj: -16
	I1217 11:54:03.998627 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:04.499505 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:04.998656 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:05.498670 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:05.285185 1949672 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-382022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:54:05.307642 1949672 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:54:05.312261 1949672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
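The bash pipeline above is the idempotent form of "add a hosts entry": it drops any existing line ending with a tab plus host.minikube.internal, appends the gateway IP, and copies the result back over /etc/hosts. A stdlib-only Go sketch of the same rewrite (the temp-file-and-sudo-cp step of the original, which keeps the update atomic, is omitted here):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostEntry rewrites a hosts file so exactly one line maps name to ip,
    // preserving every other entry.
    func ensureHostEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop stale entries for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }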
	I1217 11:54:05.323253 1949672 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:54:05.323405 1949672 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:54:05.323466 1949672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:05.364791 1949672 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:05.364821 1949672 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:54:05.364879 1949672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:05.394380 1949672 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:05.394408 1949672 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:54:05.394418 1949672 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.3 crio true true} ...
	I1217 11:54:05.394544 1949672 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-382022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:54:05.394637 1949672 ssh_runner.go:195] Run: crio config
	I1217 11:54:05.446258 1949672 cni.go:84] Creating CNI manager for ""
	I1217 11:54:05.446293 1949672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:05.446328 1949672 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:54:05.446366 1949672 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382022 NodeName:default-k8s-diff-port-382022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:54:05.446575 1949672 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-382022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:54:05.446670 1949672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:54:05.455762 1949672 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:54:05.455842 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:54:05.465013 1949672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 11:54:05.479958 1949672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:54:05.499965 1949672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
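The 2224-byte file just copied is the multi-document kubeadm config rendered above: InitConfiguration and ClusterConfiguration for kubeadm itself, a KubeletConfiguration that effectively disables disk-based eviction (0% thresholds, imageGCHighThresholdPercent: 100) so CI disk pressure cannot evict pods, and a KubeProxyConfiguration. A small stdlib-only sketch that lists the documents in such a file; the path is the one used later in this log, and the naive "---" splitting assumes the separators sit on their own lines, as they do here:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // main prints the apiVersion and kind of each YAML document in a kubeadm config.
    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            var apiVersion, kind string
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "apiVersion:") {
                    apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
                }
                if strings.HasPrefix(line, "kind:") {
                    kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
                }
            }
            fmt.Printf("document %d: %s %s\n", i+1, apiVersion, kind)
        }
    }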
	I1217 11:54:05.516505 1949672 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:54:05.521676 1949672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:54:05.533354 1949672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:05.626902 1949672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:05.667623 1949672 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022 for IP: 192.168.76.2
	I1217 11:54:05.667648 1949672 certs.go:195] generating shared ca certs ...
	I1217 11:54:05.667678 1949672 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.667878 1949672 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:54:05.667942 1949672 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:54:05.667958 1949672 certs.go:257] generating profile certs ...
	I1217 11:54:05.668041 1949672 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key
	I1217 11:54:05.668063 1949672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.crt with IP's: []
	I1217 11:54:05.836493 1949672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.crt ...
	I1217 11:54:05.836521 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.crt: {Name:mk6d7fcb7a2ad0f3950b9dcf68fb09630ede687c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.836703 1949672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key ...
	I1217 11:54:05.836719 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key: {Name:mk05036f39c3e70ff9d1cd2a48d6c33d6185c94f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.836802 1949672 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a
	I1217 11:54:05.836818 1949672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:54:05.966442 1949672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a ...
	I1217 11:54:05.966472 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a: {Name:mk87ca2f10e9e49dc362b4350b3b634875eba947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.966657 1949672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a ...
	I1217 11:54:05.966673 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a: {Name:mk1805b63a3f52e0c3b884bd061011d971eee143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.966751 1949672 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt
	I1217 11:54:05.966831 1949672 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key
	I1217 11:54:05.966887 1949672 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key
	I1217 11:54:05.966905 1949672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt with IP's: []
	I1217 11:54:06.065081 1949672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt ...
	I1217 11:54:06.065106 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt: {Name:mkf1d916f1ba98c0e284ef3c153c52de42ea1866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:06.065288 1949672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key ...
	I1217 11:54:06.065310 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key: {Name:mka65834a6bf35447449dadcc877f37e4dc848f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
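The three profile certificates generated here are all signed by the shared minikubeCA: a client cert for kubectl, an apiserver serving cert whose IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) cover the in-cluster service IP, loopback, and the node IP, and a proxy-client (aggregator) cert. A stripped-down sketch of producing such a CA-signed cert with Go's crypto/x509; minikube's actual crypto.go also handles DNS SANs, key reuse, and the file locking visible in the log, none of which appear below:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    // newSignedCert creates a key pair and a serving certificate signed by the
    // given CA, embedding the requested IP SANs.
    func newSignedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        // Self-signed CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")}
        cert, key, err := newSignedCert(caCert, caKey, ips)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("cert: %d bytes, key: %d bytes\n", len(cert), len(key))
    }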
	I1217 11:54:06.065579 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:06.065638 1949672 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:06.065649 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:06.065683 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:06.065712 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:06.065742 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:06.065797 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:06.066594 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:54:06.088950 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:54:06.109101 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:54:06.129213 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:54:06.150168 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 11:54:06.170860 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:54:06.191954 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:54:06.213109 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:54:06.234297 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:06.255826 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:06.275863 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:06.295791 1949672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:54:06.309480 1949672 ssh_runner.go:195] Run: openssl version
	I1217 11:54:06.316899 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.325317 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:06.333385 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.337481 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.337554 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.375074 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:06.384040 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.393933 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:06.404378 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.409030 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.409094 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.448446 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:06.458647 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.468044 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:06.481939 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.486523 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.486673 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.535701 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:06.546247 1949672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:54:06.550918 1949672 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:54:06.550982 1949672 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:06.551089 1949672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:54:06.551148 1949672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:54:06.586826 1949672 cri.go:89] found id: ""
	I1217 11:54:06.586904 1949672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:54:06.597284 1949672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:54:06.607377 1949672 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:54:06.607457 1949672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:54:06.616031 1949672 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:54:06.616060 1949672 kubeadm.go:158] found existing configuration files:
	
	I1217 11:54:06.616110 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 11:54:06.624903 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:54:06.624962 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:54:06.633052 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 11:54:06.641516 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:54:06.641595 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:54:06.649603 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 11:54:06.657882 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:54:06.657946 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:54:06.665836 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 11:54:06.674140 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:54:06.674193 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:54:06.681865 1949672 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:54:06.722697 1949672 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:54:06.722786 1949672 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:54:06.745112 1949672 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:54:06.745212 1949672 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:54:06.745256 1949672 kubeadm.go:319] OS: Linux
	I1217 11:54:06.745310 1949672 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:54:06.745366 1949672 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:54:06.745427 1949672 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:54:06.745506 1949672 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:54:06.745590 1949672 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:54:06.745647 1949672 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:54:06.745705 1949672 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:54:06.745757 1949672 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:54:06.808499 1949672 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:54:06.808685 1949672 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:54:06.808812 1949672 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:54:06.817180 1949672 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:54:03.458943 1952673 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:54:03.459234 1952673 start.go:159] libmachine.API.Create for "newest-cni-601829" (driver="docker")
	I1217 11:54:03.459273 1952673 client.go:173] LocalClient.Create starting
	I1217 11:54:03.459348 1952673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:54:03.459395 1952673 main.go:143] libmachine: Decoding PEM data...
	I1217 11:54:03.459430 1952673 main.go:143] libmachine: Parsing certificate...
	I1217 11:54:03.459514 1952673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:54:03.459573 1952673 main.go:143] libmachine: Decoding PEM data...
	I1217 11:54:03.459590 1952673 main.go:143] libmachine: Parsing certificate...
	I1217 11:54:03.460030 1952673 cli_runner.go:164] Run: docker network inspect newest-cni-601829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:54:03.479513 1952673 cli_runner.go:211] docker network inspect newest-cni-601829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:54:03.479634 1952673 network_create.go:284] running [docker network inspect newest-cni-601829] to gather additional debugging logs...
	I1217 11:54:03.479675 1952673 cli_runner.go:164] Run: docker network inspect newest-cni-601829
	W1217 11:54:03.502906 1952673 cli_runner.go:211] docker network inspect newest-cni-601829 returned with exit code 1
	I1217 11:54:03.502936 1952673 network_create.go:287] error running [docker network inspect newest-cni-601829]: docker network inspect newest-cni-601829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-601829 not found
	I1217 11:54:03.502950 1952673 network_create.go:289] output of [docker network inspect newest-cni-601829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-601829 not found
	
	** /stderr **
	I1217 11:54:03.503091 1952673 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:54:03.523660 1952673 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3d92c06bf7e1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:dc:f5:1a:95:c6} reservation:<nil>}
	I1217 11:54:03.524406 1952673 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e34a3db6b97 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:b3:69:9a:9a:9f} reservation:<nil>}
	I1217 11:54:03.525252 1952673 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d8460370d724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:bb:68:9a:9d:ac} reservation:<nil>}
	I1217 11:54:03.525986 1952673 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-009b4cca67d1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:77:e4:db:4d:bd} reservation:<nil>}
	I1217 11:54:03.526880 1952673 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020677c0}
	I1217 11:54:03.526904 1952673 network_create.go:124] attempt to create docker network newest-cni-601829 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 11:54:03.526950 1952673 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-601829 newest-cni-601829
	I1217 11:54:03.585054 1952673 network_create.go:108] docker network newest-cni-601829 192.168.85.0/24 created
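To land on 192.168.85.0/24, the network creator walked the candidate private /24s and skipped every subnet already backing an existing bridge. A simpler way to reach the same outcome is to let Docker do the overlap check: `docker network create` fails when an explicit --subnet collides with an existing network, so trying candidates in order also stops at the first free one. The Go sketch below takes that shortcut; the network name and candidate list are copied from the log, and minikube itself inspects existing networks up front rather than relying on create failures:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // createNetwork tries candidate /24s in order and returns the first one that
    // Docker accepts for a new bridge network with the given name.
    func createNetwork(name string, candidates []string) (string, error) {
        for _, subnet := range candidates {
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet,
                "-o", "com.docker.network.driver.mtu=1500",
                "--label=created_by.minikube.sigs.k8s.io=true", name)
            if out, err := cmd.CombinedOutput(); err == nil {
                return subnet, nil
            } else {
                fmt.Fprintf(os.Stderr, "subnet %s rejected: %s\n", subnet, out)
            }
        }
        return "", fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        subnet, err := createNetwork("newest-cni-601829", []string{
            "192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
            "192.168.76.0/24", "192.168.85.0/24",
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("created network on", subnet)
    }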
	I1217 11:54:03.585095 1952673 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-601829" container
	I1217 11:54:03.585178 1952673 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:54:03.608125 1952673 cli_runner.go:164] Run: docker volume create newest-cni-601829 --label name.minikube.sigs.k8s.io=newest-cni-601829 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:54:03.631958 1952673 oci.go:103] Successfully created a docker volume newest-cni-601829
	I1217 11:54:03.632062 1952673 cli_runner.go:164] Run: docker run --rm --name newest-cni-601829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-601829 --entrypoint /usr/bin/test -v newest-cni-601829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:54:04.135768 1952673 oci.go:107] Successfully prepared a docker volume newest-cni-601829
	I1217 11:54:04.135854 1952673 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:04.135872 1952673 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:54:04.135939 1952673 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-601829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:54:07.807238 1952673 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-601829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.67122886s)
	I1217 11:54:07.807280 1952673 kic.go:203] duration metric: took 3.671401541s to extract preloaded images to volume ...
	W1217 11:54:07.807397 1952673 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:54:07.807472 1952673 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:54:07.807525 1952673 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:54:07.869024 1952673 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-601829 --name newest-cni-601829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-601829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-601829 --network newest-cni-601829 --ip 192.168.85.2 --volume newest-cni-601829:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:54:08.203357 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Running}}
	I1217 11:54:08.232006 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:05.999667 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:06.499280 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:06.998715 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:07.499173 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:07.998683 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:08.499367 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:08.611919 1943967 kubeadm.go:1114] duration metric: took 4.739071892s to wait for elevateKubeSystemPrivileges
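The half-second cadence of the `kubectl get sa default` retries above (roughly .498/.998 each second) is a wait for the service account controller to create the default service account in the default namespace; once it exists, the cluster is treated as ready for the RBAC binding and addon workloads that follow. A stand-alone sketch of that polling loop, using plain kubectl with the ambient kubeconfig via os/exec instead of minikube's ssh_runner, and an arbitrary two-minute timeout:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until the default service
    // account exists or the deadline passes, mirroring the retry loop in the log.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "-n", "default", "get", "sa", "default").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("default service account is ready")
    }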
	I1217 11:54:08.611960 1943967 kubeadm.go:403] duration metric: took 17.38019056s to StartCluster
	I1217 11:54:08.611983 1943967 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:08.612055 1943967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:08.614611 1943967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:08.615294 1943967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:54:08.615384 1943967 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:08.615413 1943967 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:54:08.616522 1943967 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-542273"
	I1217 11:54:08.616579 1943967 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-542273"
	I1217 11:54:08.616757 1943967 host.go:66] Checking if "embed-certs-542273" exists ...
	I1217 11:54:08.616587 1943967 addons.go:70] Setting default-storageclass=true in profile "embed-certs-542273"
	I1217 11:54:08.616838 1943967 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-542273"
	I1217 11:54:08.616663 1943967 config.go:182] Loaded profile config "embed-certs-542273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:08.617466 1943967 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:54:08.617515 1943967 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:54:08.619593 1943967 out.go:179] * Verifying Kubernetes components...
	I1217 11:54:08.624886 1943967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:08.656574 1943967 addons.go:239] Setting addon default-storageclass=true in "embed-certs-542273"
	I1217 11:54:08.656739 1943967 host.go:66] Checking if "embed-certs-542273" exists ...
	I1217 11:54:08.657484 1943967 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:54:08.665416 1943967 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:54:08.666939 1943967 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:08.666963 1943967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:54:08.667035 1943967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542273
	I1217 11:54:08.702919 1943967 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:08.702952 1943967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:54:08.703127 1943967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542273
	I1217 11:54:08.707477 1943967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34606 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/embed-certs-542273/id_rsa Username:docker}
	I1217 11:54:08.734734 1943967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34606 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/embed-certs-542273/id_rsa Username:docker}
	I1217 11:54:08.811622 1943967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:54:08.850787 1943967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:08.865097 1943967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:08.880382 1943967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:09.034420 1943967 start.go:1013] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 11:54:09.035503 1943967 node_ready.go:35] waiting up to 6m0s for node "embed-certs-542273" to be "Ready" ...
	I1217 11:54:09.245302 1943967 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 11:54:06.173650 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	W1217 11:54:08.677174 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	I1217 11:54:06.918309 1949672 out.go:252]   - Generating certificates and keys ...
	I1217 11:54:06.918424 1949672 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:54:06.918524 1949672 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:54:06.918647 1949672 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:54:07.073056 1949672 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:54:07.251338 1949672 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:54:07.356906 1949672 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:54:07.408562 1949672 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:54:07.408768 1949672 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-382022 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:54:07.519138 1949672 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:54:07.519489 1949672 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-382022 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:54:07.641980 1949672 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:54:07.914870 1949672 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:54:07.976155 1949672 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:54:07.976308 1949672 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:54:08.481301 1949672 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:54:08.662795 1949672 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:54:08.823186 1949672 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:54:09.305092 1949672 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:54:09.513318 1949672 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:54:09.513888 1949672 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:54:09.517635 1949672 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:54:09.519187 1949672 out.go:252]   - Booting up control plane ...
	I1217 11:54:09.519332 1949672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:54:09.519446 1949672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:54:09.519970 1949672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:54:09.553085 1949672 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:54:09.553207 1949672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:54:09.561418 1949672 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:54:09.561661 1949672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:54:09.561750 1949672 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:54:09.680741 1949672 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:54:09.680916 1949672 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:54:09.246521 1943967 addons.go:530] duration metric: took 631.103441ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:54:09.538406 1943967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-542273" context rescaled to 1 replicas
	I1217 11:54:08.261275 1952673 cli_runner.go:164] Run: docker exec newest-cni-601829 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:54:08.311425 1952673 oci.go:144] the created container "newest-cni-601829" has a running status.
	I1217 11:54:08.311462 1952673 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa...
	I1217 11:54:08.398560 1952673 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:54:08.434179 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:08.456022 1952673 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:54:08.456043 1952673 kic_runner.go:114] Args: [docker exec --privileged newest-cni-601829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:54:08.513047 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:08.540098 1952673 machine.go:94] provisionDockerMachine start ...
	I1217 11:54:08.540203 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:08.567108 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:08.567642 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:08.567718 1952673 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:54:08.568606 1952673 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34910->127.0.0.1:34616: read: connection reset by peer
	I1217 11:54:11.709796 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-601829
	
	I1217 11:54:11.709846 1952673 ubuntu.go:182] provisioning hostname "newest-cni-601829"
	I1217 11:54:11.709923 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:11.728488 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:11.728726 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:11.728742 1952673 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-601829 && echo "newest-cni-601829" | sudo tee /etc/hostname
	I1217 11:54:11.876758 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-601829
	
	I1217 11:54:11.876841 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:11.896951 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:11.897243 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:11.897262 1952673 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-601829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-601829/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-601829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:54:12.054909 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:54:12.054940 1952673 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:54:12.054985 1952673 ubuntu.go:190] setting up certificates
	I1217 11:54:12.055006 1952673 provision.go:84] configureAuth start
	I1217 11:54:12.055078 1952673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-601829
	I1217 11:54:12.082522 1952673 provision.go:143] copyHostCerts
	I1217 11:54:12.082632 1952673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:54:12.082664 1952673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:54:12.082752 1952673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:54:12.082895 1952673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:54:12.082911 1952673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:54:12.082957 1952673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:54:12.083116 1952673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:54:12.083134 1952673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:54:12.083175 1952673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:54:12.083294 1952673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.newest-cni-601829 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-601829]
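The provision step logged above generates a server certificate whose SANs cover the machine's IPs and names (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-601829). As a minimal, hypothetical sketch of that kind of SAN-bearing certificate generation (this is not minikube's provision code; it self-signs instead of using the CA, and all values are copied from the log purely for illustration):

// certsketch.go - hypothetical sketch, NOT minikube's provision code.
// Shows how a server cert with IP and DNS SANs like those logged above
// can be produced with Go's standard library.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Template mirroring the SANs from the provision log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-601829"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-601829"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}

	// Self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem
	// pair referenced in the log and then copies server.pem onto the node.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}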
	I1217 11:54:12.116241 1952673 provision.go:177] copyRemoteCerts
	I1217 11:54:12.116317 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:54:12.116413 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.143803 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:12.250113 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:54:12.272303 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:54:12.293840 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:54:12.320738 1952673 provision.go:87] duration metric: took 265.71885ms to configureAuth
	I1217 11:54:12.320768 1952673 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:54:12.320994 1952673 config.go:182] Loaded profile config "newest-cni-601829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:12.321118 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.345826 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:12.346154 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:12.346177 1952673 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:54:12.646431 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:54:12.646457 1952673 machine.go:97] duration metric: took 4.106334409s to provisionDockerMachine
	I1217 11:54:12.646472 1952673 client.go:176] duration metric: took 9.187188683s to LocalClient.Create
	I1217 11:54:12.646493 1952673 start.go:167] duration metric: took 9.187260602s to libmachine.API.Create "newest-cni-601829"
	I1217 11:54:12.646501 1952673 start.go:293] postStartSetup for "newest-cni-601829" (driver="docker")
	I1217 11:54:12.646517 1952673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:54:12.646599 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:54:12.646654 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.667021 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:12.764462 1952673 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:54:12.768266 1952673 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:54:12.768300 1952673 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:54:12.768313 1952673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:54:12.768374 1952673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:54:12.768499 1952673 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:54:12.768663 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:54:12.777157 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:12.798867 1952673 start.go:296] duration metric: took 152.346268ms for postStartSetup
	I1217 11:54:12.799242 1952673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-601829
	I1217 11:54:12.821328 1952673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json ...
	I1217 11:54:12.821714 1952673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:54:12.821775 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.843249 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:12.937731 1952673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:54:12.943151 1952673 start.go:128] duration metric: took 9.48653629s to createHost
	I1217 11:54:12.943183 1952673 start.go:83] releasing machines lock for "newest-cni-601829", held for 9.486713472s
	I1217 11:54:12.943262 1952673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-601829
	I1217 11:54:12.962925 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:12.962980 1952673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:12.962992 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:12.963028 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:12.963067 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:12.963100 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:12.963162 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:12.963246 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:12.963317 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.983016 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:13.122072 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:13.142675 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:13.160748 1952673 ssh_runner.go:195] Run: openssl version
	I1217 11:54:13.166971 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.174990 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:13.189067 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.193888 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.193961 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.230113 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:13.238740 1952673 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	W1217 11:54:11.173180 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	I1217 11:54:11.672503 1938284 node_ready.go:49] node "no-preload-737478" is "Ready"
	I1217 11:54:11.672612 1938284 node_ready.go:38] duration metric: took 15.00348409s for node "no-preload-737478" to be "Ready" ...
	I1217 11:54:11.672640 1938284 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:54:11.672697 1938284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:54:11.686423 1938284 api_server.go:72] duration metric: took 15.427831795s to wait for apiserver process to appear ...
	I1217 11:54:11.686450 1938284 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:54:11.686472 1938284 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 11:54:11.692883 1938284 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 11:54:11.694000 1938284 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 11:54:11.694026 1938284 api_server.go:131] duration metric: took 7.568574ms to wait for apiserver health ...
	I1217 11:54:11.694036 1938284 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:11.697666 1938284 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:11.697701 1938284 system_pods.go:61] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:11.697708 1938284 system_pods.go:61] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:11.697721 1938284 system_pods.go:61] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:11.697726 1938284 system_pods.go:61] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:11.697732 1938284 system_pods.go:61] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:11.697737 1938284 system_pods.go:61] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:11.697742 1938284 system_pods.go:61] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:11.697761 1938284 system_pods.go:61] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:11.697773 1938284 system_pods.go:74] duration metric: took 3.728564ms to wait for pod list to return data ...
	I1217 11:54:11.697786 1938284 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:11.700360 1938284 default_sa.go:45] found service account: "default"
	I1217 11:54:11.700380 1938284 default_sa.go:55] duration metric: took 2.58861ms for default service account to be created ...
	I1217 11:54:11.700388 1938284 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:54:11.703340 1938284 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:11.703369 1938284 system_pods.go:89] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:11.703375 1938284 system_pods.go:89] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:11.703382 1938284 system_pods.go:89] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:11.703385 1938284 system_pods.go:89] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:11.703389 1938284 system_pods.go:89] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:11.703393 1938284 system_pods.go:89] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:11.703398 1938284 system_pods.go:89] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:11.703417 1938284 system_pods.go:89] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:11.703462 1938284 retry.go:31] will retry after 238.206881ms: missing components: kube-dns
	I1217 11:54:11.947957 1938284 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:11.948003 1938284 system_pods.go:89] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:11.948013 1938284 system_pods.go:89] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:11.948021 1938284 system_pods.go:89] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:11.948036 1938284 system_pods.go:89] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:11.948042 1938284 system_pods.go:89] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:11.948047 1938284 system_pods.go:89] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:11.948052 1938284 system_pods.go:89] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:11.948059 1938284 system_pods.go:89] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:11.948086 1938284 retry.go:31] will retry after 336.185113ms: missing components: kube-dns
	I1217 11:54:12.289119 1938284 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:12.289156 1938284 system_pods.go:89] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Running
	I1217 11:54:12.289164 1938284 system_pods.go:89] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:12.289169 1938284 system_pods.go:89] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:12.289175 1938284 system_pods.go:89] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:12.289181 1938284 system_pods.go:89] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:12.289186 1938284 system_pods.go:89] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:12.289192 1938284 system_pods.go:89] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:12.289197 1938284 system_pods.go:89] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Running
	I1217 11:54:12.289208 1938284 system_pods.go:126] duration metric: took 588.813191ms to wait for k8s-apps to be running ...
	I1217 11:54:12.289222 1938284 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:54:12.289271 1938284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:12.305816 1938284 system_svc.go:56] duration metric: took 16.582374ms WaitForService to wait for kubelet
	I1217 11:54:12.305853 1938284 kubeadm.go:587] duration metric: took 16.047266093s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:54:12.305877 1938284 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:12.309808 1938284 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:12.309845 1938284 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:12.309872 1938284 node_conditions.go:105] duration metric: took 3.989067ms to run NodePressure ...
	I1217 11:54:12.309890 1938284 start.go:242] waiting for startup goroutines ...
	I1217 11:54:12.309905 1938284 start.go:247] waiting for cluster config update ...
	I1217 11:54:12.309920 1938284 start.go:256] writing updated cluster config ...
	I1217 11:54:12.310266 1938284 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:12.315325 1938284 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:54:12.319964 1938284 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.325049 1938284 pod_ready.go:94] pod "coredns-7d764666f9-n2kvr" is "Ready"
	I1217 11:54:12.325078 1938284 pod_ready.go:86] duration metric: took 5.090952ms for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.327456 1938284 pod_ready.go:83] waiting for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.332285 1938284 pod_ready.go:94] pod "etcd-no-preload-737478" is "Ready"
	I1217 11:54:12.332311 1938284 pod_ready.go:86] duration metric: took 4.824504ms for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.334673 1938284 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.339229 1938284 pod_ready.go:94] pod "kube-apiserver-no-preload-737478" is "Ready"
	I1217 11:54:12.339255 1938284 pod_ready.go:86] duration metric: took 4.556242ms for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.341989 1938284 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.720803 1938284 pod_ready.go:94] pod "kube-controller-manager-no-preload-737478" is "Ready"
	I1217 11:54:12.720842 1938284 pod_ready.go:86] duration metric: took 378.825955ms for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.920663 1938284 pod_ready.go:83] waiting for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.320359 1938284 pod_ready.go:94] pod "kube-proxy-5tkm8" is "Ready"
	I1217 11:54:13.320394 1938284 pod_ready.go:86] duration metric: took 399.697758ms for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.520652 1938284 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.919862 1938284 pod_ready.go:94] pod "kube-scheduler-no-preload-737478" is "Ready"
	I1217 11:54:13.919890 1938284 pod_ready.go:86] duration metric: took 399.210577ms for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.919902 1938284 pod_ready.go:40] duration metric: took 1.60454372s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:54:13.978651 1938284 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:54:13.981553 1938284 out.go:179] * Done! kubectl is now configured to use "no-preload-737478" cluster and "default" namespace by default
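The 1938284 run above finishes by polling each kube-system pod until its Ready condition is true, within a 4m0s budget. A rough client-go sketch of that style of readiness polling (hypothetical code, not minikube's pod_ready.go; the kubeconfig path and label selector are placeholders):

// readysketch.go - hypothetical polling loop, not minikube's implementation.
// Illustrates how "waiting for pod ... to be Ready", as logged above, can be
// done with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // simple fixed interval; the real code uses a retry helper
	}
	fmt.Println("timed out waiting for Ready")
}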
	I1217 11:54:13.247996 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.256422 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:13.264316 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.268128 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.268203 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.315817 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:13.326466 1952673 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:54:13.336552 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.346156 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:13.355310 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.359830 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.359905 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.396396 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:13.405066 1952673 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
	I1217 11:54:13.413671 1952673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:54:13.417627 1952673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 11:54:13.421749 1952673 ssh_runner.go:195] Run: cat /version.json
	I1217 11:54:13.421825 1952673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:54:13.425903 1952673 ssh_runner.go:195] Run: systemctl --version
	I1217 11:54:13.481905 1952673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:54:13.519576 1952673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:54:13.524797 1952673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:54:13.524870 1952673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:54:13.552127 1952673 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:54:13.552154 1952673 start.go:496] detecting cgroup driver to use...
	I1217 11:54:13.552188 1952673 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:54:13.552229 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:54:13.568824 1952673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:54:13.582024 1952673 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:54:13.582074 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:54:13.600261 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:54:13.618695 1952673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:54:13.713205 1952673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:54:13.806289 1952673 docker.go:234] disabling docker service ...
	I1217 11:54:13.806350 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:54:13.825391 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:54:13.839145 1952673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:54:13.929113 1952673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:54:14.022583 1952673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:54:14.036976 1952673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:54:14.053712 1952673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:54:14.053781 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.065380 1952673 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:54:14.065452 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.076430 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.088712 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.101299 1952673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:54:14.112583 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.124279 1952673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.142846 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.155367 1952673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:54:14.166460 1952673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:54:14.175467 1952673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:14.272925 1952673 ssh_runner.go:195] Run: sudo systemctl restart crio
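The sequence above pins the pause image and cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting CRI-O. The same whole-line key replacement, shown as a self-contained sketch (the helper name and sample config contents are invented for illustration):

// criosketch.go - illustrative only; mirrors the sed substitutions logged above.
package main

import (
	"fmt"
	"regexp"
)

// setKey replaces any existing `key = ...` line with the quoted value,
// matching the `sed -i 's|^.*key = .*$|...|'` pattern used in the log.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	// Invented sample of /etc/crio/crio.conf.d/02-crio.conf contents.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"

	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}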
	I1217 11:54:14.445955 1952673 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:54:14.446026 1952673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:54:14.451129 1952673 start.go:564] Will wait 60s for crictl version
	I1217 11:54:14.451199 1952673 ssh_runner.go:195] Run: which crictl
	I1217 11:54:14.455481 1952673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:54:14.489601 1952673 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:54:14.489698 1952673 ssh_runner.go:195] Run: crio --version
	I1217 11:54:14.526616 1952673 ssh_runner.go:195] Run: crio --version
	I1217 11:54:14.563866 1952673 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 11:54:14.565422 1952673 cli_runner.go:164] Run: docker network inspect newest-cni-601829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:54:14.588365 1952673 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:54:14.593154 1952673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
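The one-liner above refreshes the host.minikube.internal record by filtering out any existing entry and appending a fresh one before copying the result back over /etc/hosts. The same idea in a tiny, stand-alone Go sketch (hypothetical; the sample file contents are made up):

// hostssketch.go - illustrative only; same "drop old entry, append new one"
// idea as the grep -v / echo / cp pipeline in the log.
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord returns hosts-file contents with any existing
// host.minikube.internal line removed and a new one appended.
func injectHostRecord(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\thost.minikube.internal\n", ip)
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.85.3\thost.minikube.internal"
	fmt.Print(injectHostRecord(sample, "192.168.85.1"))
}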
	I1217 11:54:14.607614 1952673 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 11:54:10.674768 1949672 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00125244s
	I1217 11:54:10.677769 1949672 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:54:10.677900 1949672 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1217 11:54:10.678018 1949672 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:54:10.678095 1949672 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:54:12.221193 1949672 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.543360868s
	I1217 11:54:13.132964 1949672 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.455221403s
	I1217 11:54:15.179643 1949672 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501810323s
	I1217 11:54:15.198199 1949672 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:54:15.210401 1949672 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:54:15.219215 1949672 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:54:15.219595 1949672 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-382022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:54:15.230278 1949672 kubeadm.go:319] [bootstrap-token] Using token: 18j5vb.prvjek6drow03x0n
	W1217 11:54:11.040124 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	W1217 11:54:13.539330 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	I1217 11:54:14.608788 1952673 kubeadm.go:884] updating cluster {Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:54:14.608936 1952673 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:14.609001 1952673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:14.652078 1952673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:14.652104 1952673 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:54:14.652164 1952673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:14.687496 1952673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:14.687525 1952673 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:54:14.687568 1952673 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 11:54:14.687682 1952673 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-601829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:54:14.687756 1952673 ssh_runner.go:195] Run: crio config
	I1217 11:54:14.742034 1952673 cni.go:84] Creating CNI manager for ""
	I1217 11:54:14.742062 1952673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:14.742083 1952673 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:54:14.742111 1952673 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-601829 NodeName:newest-cni-601829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:54:14.742272 1952673 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-601829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:54:14.742362 1952673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:54:14.750995 1952673 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:54:14.751068 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:54:14.759143 1952673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 11:54:14.772329 1952673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:54:14.789302 1952673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1217 11:54:14.805896 1952673 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:54:14.809821 1952673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:54:14.820272 1952673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:14.907791 1952673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:14.938715 1952673 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829 for IP: 192.168.85.2
	I1217 11:54:14.938738 1952673 certs.go:195] generating shared ca certs ...
	I1217 11:54:14.938761 1952673 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:14.938910 1952673 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:54:14.938956 1952673 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:54:14.938966 1952673 certs.go:257] generating profile certs ...
	I1217 11:54:14.939041 1952673 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.key
	I1217 11:54:14.939067 1952673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.crt with IP's: []
	I1217 11:54:15.002286 1952673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.crt ...
	I1217 11:54:15.002315 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.crt: {Name:mkb123ab6040f3a23d0c5bc4863b7319ee083bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.002485 1952673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.key ...
	I1217 11:54:15.002496 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.key: {Name:mk3e7b7710383da310c6507eba0176edaaab2dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.002610 1952673 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c
	I1217 11:54:15.002628 1952673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 11:54:15.135575 1952673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c ...
	I1217 11:54:15.135607 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c: {Name:mk2201be273856597b4d2ae93ea533ac20a42c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.135777 1952673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c ...
	I1217 11:54:15.135790 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c: {Name:mk7b1d8b442a5de29a306b93e98efce5c9fba488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.135872 1952673 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt
	I1217 11:54:15.135948 1952673 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key
	I1217 11:54:15.136019 1952673 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key
	I1217 11:54:15.136035 1952673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt with IP's: []
	I1217 11:54:15.182803 1952673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt ...
	I1217 11:54:15.182833 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt: {Name:mk251d73ccfaf6668c2ffd35a465891b1c2424b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.182998 1952673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key ...
	I1217 11:54:15.183017 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key: {Name:mk70c91c0d7d67b0b7a8ca66d601cb6b7aac8ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.183226 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:15.183282 1952673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:15.183300 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:15.183343 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:15.183385 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:15.183424 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:15.183485 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:15.184142 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:54:15.206920 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:54:15.230085 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:54:15.253075 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:54:15.272687 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:54:15.290673 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:54:15.309839 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:54:15.328916 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:54:15.348364 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:15.367098 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:15.385130 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:15.405047 1952673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
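Note: the apiserver profile cert generated at 11:54:15.002628 above is signed for the service and node IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] and is copied to /var/lib/minikube/certs by the scp steps. A minimal sketch for confirming those SANs on the node after the copy (paths taken from the log; running this manually is not part of the test):

	# Print the Subject Alternative Names of the copied apiserver cert
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'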
	I1217 11:54:15.418199 1952673 ssh_runner.go:195] Run: openssl version
	I1217 11:54:15.425226 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.432920 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:15.440771 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.445068 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.445119 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.483365 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:15.491381 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.499051 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:15.507373 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.511249 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.511306 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.548124 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:15.557121 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.564716 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:15.572596 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.577173 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.577223 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.616973 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
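Note: each CA above is installed the same way — link the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and verify that a <hash>.0 symlink exists (the 3ec20f2e.0, b5213941.0 and 51391683.0 checks). A minimal sketch of that convention for one file (filename from the log; creating the hash link by hand is an assumption, minikube performs this step itself):

	pem=/usr/share/ca-certificates/16729412.pem
	sudo ln -fs "$pem" /etc/ssl/certs/16729412.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. 3ec20f2e
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # what the 'sudo test -L' above verifies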
	I1217 11:54:15.628112 1952673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:54:15.633805 1952673 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:54:15.633874 1952673 kubeadm.go:401] StartCluster: {Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:15.633980 1952673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:54:15.634044 1952673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:54:15.666106 1952673 cri.go:89] found id: ""
	I1217 11:54:15.666194 1952673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:54:15.674869 1952673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:54:15.683089 1952673 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:54:15.683173 1952673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:54:15.691133 1952673 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:54:15.691156 1952673 kubeadm.go:158] found existing configuration files:
	
	I1217 11:54:15.691208 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:54:15.699133 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:54:15.699195 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:54:15.707630 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:54:15.716211 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:54:15.716270 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:54:15.723991 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:54:15.732030 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:54:15.732091 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:54:15.740086 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:54:15.749785 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:54:15.749851 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
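Note: the four grep/rm pairs above are minikube's stale-config cleanup — any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A condensed sketch of the same check (endpoint and file list taken from the log):

	endpoint=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done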
	I1217 11:54:15.759003 1952673 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:54:15.802551 1952673 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:54:15.802633 1952673 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:54:15.896842 1952673 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:54:15.896913 1952673 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:54:15.896945 1952673 kubeadm.go:319] OS: Linux
	I1217 11:54:15.896985 1952673 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:54:15.897045 1952673 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:54:15.897133 1952673 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:54:15.897249 1952673 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:54:15.897336 1952673 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:54:15.897416 1952673 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:54:15.897493 1952673 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:54:15.897614 1952673 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:54:15.962567 1952673 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:54:15.962703 1952673 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:54:15.962857 1952673 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:54:15.984824 1952673 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:54:15.232362 1949672 out.go:252]   - Configuring RBAC rules ...
	I1217 11:54:15.232571 1949672 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:54:15.235683 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:54:15.241766 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:54:15.244638 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:54:15.247503 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:54:15.250778 1949672 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:54:15.587309 1949672 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:54:16.005121 1949672 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:54:16.586080 1949672 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:54:16.587160 1949672 kubeadm.go:319] 
	I1217 11:54:16.587265 1949672 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:54:16.587283 1949672 kubeadm.go:319] 
	I1217 11:54:16.587404 1949672 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:54:16.587422 1949672 kubeadm.go:319] 
	I1217 11:54:16.587471 1949672 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:54:16.587572 1949672 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:54:16.587653 1949672 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:54:16.587664 1949672 kubeadm.go:319] 
	I1217 11:54:16.587749 1949672 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:54:16.587759 1949672 kubeadm.go:319] 
	I1217 11:54:16.587827 1949672 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:54:16.587837 1949672 kubeadm.go:319] 
	I1217 11:54:16.587906 1949672 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:54:16.588006 1949672 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:54:16.588072 1949672 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:54:16.588078 1949672 kubeadm.go:319] 
	I1217 11:54:16.588154 1949672 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:54:16.588241 1949672 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:54:16.588267 1949672 kubeadm.go:319] 
	I1217 11:54:16.588342 1949672 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 18j5vb.prvjek6drow03x0n \
	I1217 11:54:16.588433 1949672 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:54:16.588459 1949672 kubeadm.go:319] 	--control-plane 
	I1217 11:54:16.588469 1949672 kubeadm.go:319] 
	I1217 11:54:16.588560 1949672 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:54:16.588569 1949672 kubeadm.go:319] 
	I1217 11:54:16.588669 1949672 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 18j5vb.prvjek6drow03x0n \
	I1217 11:54:16.588781 1949672 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:54:16.591660 1949672 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:54:16.591793 1949672 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:54:16.591823 1949672 cni.go:84] Creating CNI manager for ""
	I1217 11:54:16.591840 1949672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:16.594322 1949672 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:54:15.986950 1952673 out.go:252]   - Generating certificates and keys ...
	I1217 11:54:15.987059 1952673 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:54:15.989058 1952673 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:54:16.052493 1952673 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:54:16.087998 1952673 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:54:16.320203 1952673 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:54:16.510181 1952673 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:54:16.624810 1952673 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:54:16.625028 1952673 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-601829] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:54:16.722070 1952673 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:54:16.722281 1952673 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-601829] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:54:16.805502 1952673 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:54:16.882349 1952673 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:54:16.913704 1952673 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:54:16.913837 1952673 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:54:17.280943 1952673 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:54:17.360084 1952673 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:54:17.475256 1952673 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:54:17.716171 1952673 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:54:17.898338 1952673 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:54:17.898971 1952673 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:54:17.903068 1952673 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:54:17.904547 1952673 out.go:252]   - Booting up control plane ...
	I1217 11:54:17.904654 1952673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:54:17.904723 1952673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:54:17.905383 1952673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:54:17.920082 1952673 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:54:17.920204 1952673 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:54:17.927602 1952673 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:54:17.927944 1952673 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:54:17.928020 1952673 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:54:18.044594 1952673 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:54:18.044705 1952673 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:54:16.595304 1949672 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:54:16.599717 1949672 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:54:16.599734 1949672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:54:16.614847 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:54:16.838248 1949672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:54:16.838518 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-382022 minikube.k8s.io/updated_at=2025_12_17T11_54_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=default-k8s-diff-port-382022 minikube.k8s.io/primary=true
	I1217 11:54:16.838573 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:16.851345 1949672 ops.go:34] apiserver oom_adj: -16
	I1217 11:54:16.923207 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:17.424221 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:17.923417 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:18.423853 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:18.924195 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:19.424070 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:19.923885 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 11:54:16.038490 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	W1217 11:54:18.038752 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	W1217 11:54:20.040515 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	I1217 11:54:20.424003 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:20.924077 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:21.424694 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:21.506730 1949672 kubeadm.go:1114] duration metric: took 4.668466133s to wait for elevateKubeSystemPrivileges
	I1217 11:54:21.506770 1949672 kubeadm.go:403] duration metric: took 14.955794098s to StartCluster
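Note: the repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to become readable after the minikube-rbac ClusterRoleBinding is created; the 4.67s elevateKubeSystemPrivileges metric is that wait. A rough equivalent of the poll (kubectl binary and kubeconfig paths from the log; the 0.5s interval is an assumption):

	# poll until the controller manager has created the default ServiceAccount
	until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done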
	I1217 11:54:21.506793 1949672 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:21.506897 1949672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:21.508757 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:21.509017 1949672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:54:21.509046 1949672 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:21.509125 1949672 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:54:21.509272 1949672 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382022"
	I1217 11:54:21.509302 1949672 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:21.509302 1949672 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382022"
	I1217 11:54:21.509342 1949672 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382022"
	I1217 11:54:21.509308 1949672 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382022"
	I1217 11:54:21.509396 1949672 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:54:21.509798 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:21.509989 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:21.510630 1949672 out.go:179] * Verifying Kubernetes components...
	I1217 11:54:21.512170 1949672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:21.539888 1949672 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:54:21.541227 1949672 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:21.541249 1949672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:54:21.541318 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:21.541550 1949672 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382022"
	I1217 11:54:21.541595 1949672 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:54:21.542100 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:21.571890 1949672 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:21.571915 1949672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:54:21.571979 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:21.580691 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:21.606881 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:21.626407 1949672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:54:21.703715 1949672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:21.711434 1949672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:21.731973 1949672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:21.827947 1949672 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 11:54:21.831517 1949672 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:54:22.077861 1949672 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
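Note: the sed pipeline at 11:54:21.626407 injects a hosts block for host.minikube.internal (192.168.76.1) into the CoreDNS Corefile before replacing the ConfigMap, which is what the "host record injected" line confirms. One way to inspect the result afterwards (plain kubectl against the same cluster; not part of the test flow):

	# The Corefile should now contain:
	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'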
	I1217 11:54:18.546353 1952673 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.881861ms
	I1217 11:54:18.549247 1952673 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:54:18.549365 1952673 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 11:54:18.549491 1952673 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:54:18.549621 1952673 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:54:19.555157 1952673 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005759004s
	I1217 11:54:20.182847 1952673 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.633392056s
	I1217 11:54:22.052028 1952673 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502166221s
	I1217 11:54:22.073347 1952673 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:54:22.086259 1952673 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:54:22.096361 1952673 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:54:22.096703 1952673 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-601829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:54:22.105184 1952673 kubeadm.go:319] [bootstrap-token] Using token: oaw54k.najt1dba7pujt8tu
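Note: the [kubelet-check] and [control-plane-check] phases above poll well-known local health endpoints until each component answers; the per-component timings (0.5s for the kubelet through 3.5s for the apiserver) come from those polls. A hand-rolled sketch of the same probes (URLs from the log; -k skips certificate verification, an assumption for a manual check):

	curl -sf  http://127.0.0.1:10248/healthz    # kubelet
	curl -skf https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -skf https://127.0.0.1:10259/livez     # kube-scheduler
	curl -skf https://192.168.85.2:8443/livez   # kube-apiserver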
	
	
	==> CRI-O <==
	Dec 17 11:54:11 no-preload-737478 crio[805]: time="2025-12-17T11:54:11.975121023Z" level=info msg="Started container" PID=2806 containerID=32ee50996e202150378582c4d330e089cab3031ef740ac41472e640a7301945d description=kube-system/storage-provisioner/storage-provisioner id=fe11591f-50d3-4173-a8f5-1ec541a7e498 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35a4357f0165e91b644ab3073738664cd847c6f0d2ba435449adbae7eb10151a
	Dec 17 11:54:11 no-preload-737478 crio[805]: time="2025-12-17T11:54:11.976773021Z" level=info msg="Started container" PID=2809 containerID=41f3712ea2f12de2b4401c07948c2652d34fe4b6492fcd65a715c8c50c19cae7 description=kube-system/coredns-7d764666f9-n2kvr/coredns id=6a6508c7-e2b5-480b-a45b-f3a7a5dcf19a name=/runtime.v1.RuntimeService/StartContainer sandboxID=32e2c072f29f55b5c00b0e2ea39d012c81052fc004d4df43caf2e20b29188bb0
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.458079764Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6a611a8d-fee2-4f37-8b4e-4d1927af036f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.458170558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.464146322Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3af0ee4d29156c222983768882a92f30ca7f90f8ac3cc84698f81a23eecff460 UID:5812a4e7-2a1f-4e57-a29f-bf4c78d30ffd NetNS:/var/run/netns/5b0e7736-a47a-4035-b9cb-bb190e8715c0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000313020}] Aliases:map[]}"
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.464197854Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.477553011Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3af0ee4d29156c222983768882a92f30ca7f90f8ac3cc84698f81a23eecff460 UID:5812a4e7-2a1f-4e57-a29f-bf4c78d30ffd NetNS:/var/run/netns/5b0e7736-a47a-4035-b9cb-bb190e8715c0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000313020}] Aliases:map[]}"
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.477755097Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.478855719Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.47982328Z" level=info msg="Ran pod sandbox 3af0ee4d29156c222983768882a92f30ca7f90f8ac3cc84698f81a23eecff460 with infra container: default/busybox/POD" id=6a611a8d-fee2-4f37-8b4e-4d1927af036f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.481351944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=191d9a46-acc7-4463-bf3b-a591d0c11f27 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.481502131Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=191d9a46-acc7-4463-bf3b-a591d0c11f27 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.48156702Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=191d9a46-acc7-4463-bf3b-a591d0c11f27 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.482475293Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0413103b-1790-4ed1-9951-fab040f9a878 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:14 no-preload-737478 crio[805]: time="2025-12-17T11:54:14.484136927Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.417755629Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0413103b-1790-4ed1-9951-fab040f9a878 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.418372426Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04ed6dbc-3aac-4337-b5d4-86c5d758b139 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.420164696Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0fbbb930-79b9-4dec-981b-1e5339773f65 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.423587056Z" level=info msg="Creating container: default/busybox/busybox" id=e69ee5a6-16d5-4568-a64f-d10bea9bb48d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.423756225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.427481779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.42795824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.457724406Z" level=info msg="Created container 8a7a0a049058a26686286f4902a9d31dbd1193db418049c7a744d86cfc2eaac1: default/busybox/busybox" id=e69ee5a6-16d5-4568-a64f-d10bea9bb48d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.458476046Z" level=info msg="Starting container: 8a7a0a049058a26686286f4902a9d31dbd1193db418049c7a744d86cfc2eaac1" id=73ab00fe-32d3-4f1a-a7ea-ff838f17130d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:16 no-preload-737478 crio[805]: time="2025-12-17T11:54:16.460393577Z" level=info msg="Started container" PID=2880 containerID=8a7a0a049058a26686286f4902a9d31dbd1193db418049c7a744d86cfc2eaac1 description=default/busybox/busybox id=73ab00fe-32d3-4f1a-a7ea-ff838f17130d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3af0ee4d29156c222983768882a92f30ca7f90f8ac3cc84698f81a23eecff460
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8a7a0a049058a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   3af0ee4d29156       busybox                                     default
	41f3712ea2f12       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   32e2c072f29f5       coredns-7d764666f9-n2kvr                    kube-system
	32ee50996e202       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   35a4357f0165e       storage-provisioner                         kube-system
	2e66d35aef68d       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   de27cf07f0594       kindnet-fnspp                               kube-system
	741f3ed620e4a       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      27 seconds ago      Running             kube-proxy                0                   43d39aa83b8d8       kube-proxy-5tkm8                            kube-system
	d0b1060d202d2       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      37 seconds ago      Running             kube-controller-manager   0                   98fe8f7050e5b       kube-controller-manager-no-preload-737478   kube-system
	bdddf28c125a5       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      37 seconds ago      Running             kube-scheduler            0                   baf1eb0e65cf6       kube-scheduler-no-preload-737478            kube-system
	ab84adbce19ab       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      37 seconds ago      Running             etcd                      0                   1f57fb1d121e7       etcd-no-preload-737478                      kube-system
	b2cde69eea919       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      37 seconds ago      Running             kube-apiserver            0                   b1ff172210ca0       kube-apiserver-no-preload-737478            kube-system
	
	
	==> coredns [41f3712ea2f12de2b4401c07948c2652d34fe4b6492fcd65a715c8c50c19cae7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49497 - 30617 "HINFO IN 2202709847511295883.5017671121378932995. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023873513s
	
	
	==> describe nodes <==
	Name:               no-preload-737478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-737478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=no-preload-737478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_53_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:53:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-737478
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:54:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:54:21 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:54:21 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:54:21 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:54:21 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-737478
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                247c8806-279e-4c7a-81b2-36bc1da2ec08
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-n2kvr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-737478                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-fnspp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-737478             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-737478    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-5tkm8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-737478             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-737478 event: Registered Node no-preload-737478 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [ab84adbce19ab5c15b5ffada965e5ced72a243bcb28fe77839a2836196a01f30] <==
	{"level":"info","ts":"2025-12-17T11:53:46.933055Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T11:53:46.933100Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:53:46.933127Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T11:53:46.933141Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T11:53:46.933937Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-737478 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:53:46.934015Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:53:46.934043Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:53:46.934211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:53:46.934240Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:53:46.934264Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:53:46.935202Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:53:46.935262Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:53:46.935344Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:53:46.935397Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:53:46.935388Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:53:46.935453Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-17T11:53:46.935601Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-17T11:53:46.939142Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T11:53:46.939212Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-17T11:53:49.562359Z","caller":"traceutil/trace.go:172","msg":"trace[886384273] transaction","detail":"{read_only:false; response_revision:146; number_of_response:1; }","duration":"121.047713ms","start":"2025-12-17T11:53:49.441282Z","end":"2025-12-17T11:53:49.562330Z","steps":["trace[886384273] 'process raft request'  (duration: 51.159133ms)","trace[886384273] 'compare'  (duration: 69.78357ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:53:59.681835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.837431ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790741055877878 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-737478\" mod_revision:384 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-737478\" value_size:7838 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-737478\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T11:53:59.681991Z","caller":"traceutil/trace.go:172","msg":"trace[1971597284] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"375.306546ms","start":"2025-12-17T11:53:59.306659Z","end":"2025-12-17T11:53:59.681965Z","steps":["trace[1971597284] 'process raft request'  (duration: 121.800375ms)","trace[1971597284] 'compare'  (duration: 252.725011ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:53:59.682104Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T11:53:59.306608Z","time spent":"375.444346ms","remote":"127.0.0.1:33842","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7905,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-737478\" mod_revision:384 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-737478\" value_size:7838 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-737478\" > >"}
	{"level":"warn","ts":"2025-12-17T11:54:00.047025Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.361326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.103.2\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-12-17T11:54:00.047101Z","caller":"traceutil/trace.go:172","msg":"trace[444307015] range","detail":"{range_begin:/registry/masterleases/192.168.103.2; range_end:; response_count:1; response_revision:385; }","duration":"166.458496ms","start":"2025-12-17T11:53:59.880627Z","end":"2025-12-17T11:54:00.047085Z","steps":["trace[444307015] 'range keys from in-memory index tree'  (duration: 166.196127ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:54:24 up  5:36,  0 user,  load average: 5.74, 3.43, 2.22
	Linux no-preload-737478 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e66d35aef68defb06c306801e36b814dde73a79e4f9e385985efcfa43b42296] <==
	I1217 11:54:01.120359       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:01.120786       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 11:54:01.120942       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:01.121040       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:01.121092       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:01.330464       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:01.330503       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:01.330516       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:01.331255       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:01.916130       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:01.916163       1 metrics.go:72] Registering metrics
	I1217 11:54:01.916234       1 controller.go:711] "Syncing nftables rules"
	I1217 11:54:11.328793       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:54:11.328882       1 main.go:301] handling current node
	I1217 11:54:21.331926       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:54:21.331985       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b2cde69eea919f0dc47029fb255be411620afba2f8939d2e0519fda551adb827] <==
	I1217 11:53:48.115520       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 11:53:48.118824       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:53:48.119052       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 11:53:48.123021       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1217 11:53:48.130041       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1217 11:53:48.178404       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:53:48.281728       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:53:48.982598       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 11:53:48.988509       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:53:48.988563       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 11:53:49.790287       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:53:49.835238       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:53:49.885181       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:53:49.892118       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1217 11:53:49.893459       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:53:49.898276       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:53:50.007599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:53:51.070282       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:53:51.081449       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:53:51.089738       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 11:53:55.614405       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:53:55.620424       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:53:55.810683       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:53:56.010094       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 11:54:22.256457       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:49714: use of closed network connection
	
	
	==> kube-controller-manager [d0b1060d202d2a9e85c4dca4a7df4df0e4205053d1a88592c884011fe7ecc7a8] <==
	I1217 11:53:54.817850       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.817778       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818352       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818365       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818374       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818421       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818446       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818459       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818469       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818496       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818522       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818567       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.818602       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.821099       1 range_allocator.go:177] "Sending events to api server"
	I1217 11:53:54.821155       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 11:53:54.821165       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:53:54.821174       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.837625       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:53:54.843281       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.845415       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-737478" podCIDRs=["10.244.0.0/24"]
	I1217 11:53:54.917744       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:54.917769       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 11:53:54.917775       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 11:53:54.938554       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:14.819100       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [741f3ed620e4ad261646bda252dbbefab056ab0bcdf5132696e667dd3dba48e7] <==
	I1217 11:53:56.585807       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:53:56.703091       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:53:56.805091       1 shared_informer.go:377] "Caches are synced"
	I1217 11:53:56.805135       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 11:53:56.805327       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:53:56.834375       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:53:56.834455       1 server_linux.go:136] "Using iptables Proxier"
	I1217 11:53:56.841390       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:53:56.841937       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 11:53:56.842019       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:53:56.843790       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:53:56.843810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:53:56.843833       1 config.go:200] "Starting service config controller"
	I1217 11:53:56.843839       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:53:56.844104       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:53:56.844114       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:53:56.844186       1 config.go:309] "Starting node config controller"
	I1217 11:53:56.844192       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:53:56.844198       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:53:56.944262       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:53:56.944305       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:53:56.944370       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [bdddf28c125a5217710da91f0eb10168ef9dec0dfb57105f14543d8f82faecbb] <==
	E1217 11:53:48.038939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 11:53:48.038978       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1217 11:53:48.038981       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 11:53:48.039080       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 11:53:48.039326       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 11:53:48.039438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 11:53:48.039439       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 11:53:48.861272       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 11:53:48.862359       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 11:53:48.937671       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 11:53:48.980477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 11:53:49.076556       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 11:53:49.080727       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1217 11:53:49.167231       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 11:53:49.171968       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 11:53:49.216987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 11:53:49.312241       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 11:53:49.317757       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 11:53:49.367484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 11:53:49.424577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 11:53:49.424668       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 11:53:49.444312       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 11:53:49.504074       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 11:53:49.518734       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1217 11:53:51.831255       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 11:53:56 no-preload-737478 kubelet[2228]: I1217 11:53:56.067474    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4bc59c8f-5cfc-4b84-9560-1de53ffc019e-cni-cfg\") pod \"kindnet-fnspp\" (UID: \"4bc59c8f-5cfc-4b84-9560-1de53ffc019e\") " pod="kube-system/kindnet-fnspp"
	Dec 17 11:53:56 no-preload-737478 kubelet[2228]: I1217 11:53:56.067567    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qsq\" (UniqueName: \"kubernetes.io/projected/4bc59c8f-5cfc-4b84-9560-1de53ffc019e-kube-api-access-c6qsq\") pod \"kindnet-fnspp\" (UID: \"4bc59c8f-5cfc-4b84-9560-1de53ffc019e\") " pod="kube-system/kindnet-fnspp"
	Dec 17 11:53:56 no-preload-737478 kubelet[2228]: I1217 11:53:56.067596    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1e1a3b6-95ce-43ee-a816-317f34952c21-kube-proxy\") pod \"kube-proxy-5tkm8\" (UID: \"d1e1a3b6-95ce-43ee-a816-317f34952c21\") " pod="kube-system/kube-proxy-5tkm8"
	Dec 17 11:53:56 no-preload-737478 kubelet[2228]: I1217 11:53:56.067661    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bc59c8f-5cfc-4b84-9560-1de53ffc019e-xtables-lock\") pod \"kindnet-fnspp\" (UID: \"4bc59c8f-5cfc-4b84-9560-1de53ffc019e\") " pod="kube-system/kindnet-fnspp"
	Dec 17 11:53:56 no-preload-737478 kubelet[2228]: I1217 11:53:56.067733    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bc59c8f-5cfc-4b84-9560-1de53ffc019e-lib-modules\") pod \"kindnet-fnspp\" (UID: \"4bc59c8f-5cfc-4b84-9560-1de53ffc019e\") " pod="kube-system/kindnet-fnspp"
	Dec 17 11:53:59 no-preload-737478 kubelet[2228]: E1217 11:53:59.233247    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-737478" containerName="kube-apiserver"
	Dec 17 11:53:59 no-preload-737478 kubelet[2228]: I1217 11:53:59.297267    2228 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkm8" podStartSLOduration=3.297244081 podStartE2EDuration="3.297244081s" podCreationTimestamp="2025-12-17 11:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:53:57.004407936 +0000 UTC m=+6.173828128" watchObservedRunningTime="2025-12-17 11:53:59.297244081 +0000 UTC m=+8.466664304"
	Dec 17 11:54:00 no-preload-737478 kubelet[2228]: E1217 11:54:00.071107    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-737478" containerName="kube-scheduler"
	Dec 17 11:54:01 no-preload-737478 kubelet[2228]: I1217 11:54:01.027693    2228 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-fnspp" podStartSLOduration=0.60676775 podStartE2EDuration="5.027669941s" podCreationTimestamp="2025-12-17 11:53:56 +0000 UTC" firstStartedPulling="2025-12-17 11:53:56.36827827 +0000 UTC m=+5.537698541" lastFinishedPulling="2025-12-17 11:54:00.789180552 +0000 UTC m=+9.958600732" observedRunningTime="2025-12-17 11:54:01.027420489 +0000 UTC m=+10.196840680" watchObservedRunningTime="2025-12-17 11:54:01.027669941 +0000 UTC m=+10.197090131"
	Dec 17 11:54:01 no-preload-737478 kubelet[2228]: E1217 11:54:01.095687    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-737478" containerName="kube-controller-manager"
	Dec 17 11:54:02 no-preload-737478 kubelet[2228]: E1217 11:54:02.565791    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-737478" containerName="etcd"
	Dec 17 11:54:09 no-preload-737478 kubelet[2228]: E1217 11:54:09.241652    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-737478" containerName="kube-apiserver"
	Dec 17 11:54:10 no-preload-737478 kubelet[2228]: E1217 11:54:10.077024    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-737478" containerName="kube-scheduler"
	Dec 17 11:54:11 no-preload-737478 kubelet[2228]: E1217 11:54:11.101159    2228 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-737478" containerName="kube-controller-manager"
	Dec 17 11:54:11 no-preload-737478 kubelet[2228]: I1217 11:54:11.570044    2228 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 17 11:54:11 no-preload-737478 kubelet[2228]: I1217 11:54:11.679311    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8xtc\" (UniqueName: \"kubernetes.io/projected/4f523a12-a03c-4a2e-8e89-0c9d3b51612a-kube-api-access-h8xtc\") pod \"coredns-7d764666f9-n2kvr\" (UID: \"4f523a12-a03c-4a2e-8e89-0c9d3b51612a\") " pod="kube-system/coredns-7d764666f9-n2kvr"
	Dec 17 11:54:11 no-preload-737478 kubelet[2228]: I1217 11:54:11.679352    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl4d7\" (UniqueName: \"kubernetes.io/projected/ed148111-0f36-4bd0-be78-0f5941b514ee-kube-api-access-zl4d7\") pod \"storage-provisioner\" (UID: \"ed148111-0f36-4bd0-be78-0f5941b514ee\") " pod="kube-system/storage-provisioner"
	Dec 17 11:54:11 no-preload-737478 kubelet[2228]: I1217 11:54:11.679383    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f523a12-a03c-4a2e-8e89-0c9d3b51612a-config-volume\") pod \"coredns-7d764666f9-n2kvr\" (UID: \"4f523a12-a03c-4a2e-8e89-0c9d3b51612a\") " pod="kube-system/coredns-7d764666f9-n2kvr"
	Dec 17 11:54:11 no-preload-737478 kubelet[2228]: I1217 11:54:11.679407    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ed148111-0f36-4bd0-be78-0f5941b514ee-tmp\") pod \"storage-provisioner\" (UID: \"ed148111-0f36-4bd0-be78-0f5941b514ee\") " pod="kube-system/storage-provisioner"
	Dec 17 11:54:12 no-preload-737478 kubelet[2228]: E1217 11:54:12.037993    2228 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n2kvr" containerName="coredns"
	Dec 17 11:54:12 no-preload-737478 kubelet[2228]: I1217 11:54:12.046353    2228 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.046334977 podStartE2EDuration="16.046334977s" podCreationTimestamp="2025-12-17 11:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:12.046283152 +0000 UTC m=+21.215703341" watchObservedRunningTime="2025-12-17 11:54:12.046334977 +0000 UTC m=+21.215755149"
	Dec 17 11:54:13 no-preload-737478 kubelet[2228]: E1217 11:54:13.040454    2228 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n2kvr" containerName="coredns"
	Dec 17 11:54:14 no-preload-737478 kubelet[2228]: E1217 11:54:14.043014    2228 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n2kvr" containerName="coredns"
	Dec 17 11:54:14 no-preload-737478 kubelet[2228]: I1217 11:54:14.150184    2228 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-n2kvr" podStartSLOduration=18.150162531 podStartE2EDuration="18.150162531s" podCreationTimestamp="2025-12-17 11:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:12.072798304 +0000 UTC m=+21.242218494" watchObservedRunningTime="2025-12-17 11:54:14.150162531 +0000 UTC m=+23.319582719"
	Dec 17 11:54:14 no-preload-737478 kubelet[2228]: I1217 11:54:14.192978    2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwb87\" (UniqueName: \"kubernetes.io/projected/5812a4e7-2a1f-4e57-a29f-bf4c78d30ffd-kube-api-access-mwb87\") pod \"busybox\" (UID: \"5812a4e7-2a1f-4e57-a29f-bf4c78d30ffd\") " pod="default/busybox"
	
	
	==> storage-provisioner [32ee50996e202150378582c4d330e089cab3031ef740ac41472e640a7301945d] <==
	I1217 11:54:11.994438       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:54:12.005238       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:54:12.005489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:54:12.008789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:12.019249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:54:12.019479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:54:12.019706       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-737478_9118b31e-d086-49f7-a342-ce8fca31b78c!
	I1217 11:54:12.020135       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c69f3844-a665-403c-a70c-0a1934605a75", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-737478_9118b31e-d086-49f7-a342-ce8fca31b78c became leader
	W1217 11:54:12.025604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:12.029710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:54:12.120126       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-737478_9118b31e-d086-49f7-a342-ce8fca31b78c!
	W1217 11:54:14.033763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:14.040439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:16.044606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:16.048913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:18.052931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:18.058658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:20.064334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:20.070472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:22.074886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:22.079368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:24.083249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:24.088201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737478 -n no-preload-737478
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-737478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (273.358157ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-601829
helpers_test.go:244: (dbg) docker inspect newest-cni-601829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e",
	        "Created": "2025-12-17T11:54:07.887432598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1953923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:07.933419246Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/hosts",
	        "LogPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e-json.log",
	        "Name": "/newest-cni-601829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-601829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-601829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e",
	                "LowerDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-601829",
	                "Source": "/var/lib/docker/volumes/newest-cni-601829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-601829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-601829",
	                "name.minikube.sigs.k8s.io": "newest-cni-601829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5f51539f30776794eb9a89941672f60e9661830a630f91beb3667d3880dde5d0",
	            "SandboxKey": "/var/run/docker/netns/5f51539f3077",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34616"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34617"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34618"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34619"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-601829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba75054aca4fb8ab88e7766d66917111b8a98c9b6621d8d4536b729c295e0bd7",
	                    "EndpointID": "584a4af5c75e464509dd5d2387f669ec6f9119e76b8b610eceaf446729534f96",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ea:6e:93:8f:d5:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-601829",
	                        "0771ab9e37be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-601829 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-601829 logs -n 25: (1.111654549s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-714247 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                      │ cert-options-714247          │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ delete  │ -p cert-options-714247                                                                                                                                                                                                                             │ cert-options-714247          │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:51 UTC │ 17 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-401285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │                     │
	│ stop    │ -p old-k8s-version-401285 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-401285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                          │ cert-expiration-067996       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p cert-expiration-067996                                                                                                                                                                                                                          │ cert-expiration-067996       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:54:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:54:03.244695 1952673 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:03.244946 1952673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:03.244954 1952673 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:03.244959 1952673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:03.245146 1952673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:03.245673 1952673 out.go:368] Setting JSON to false
	I1217 11:54:03.246939 1952673 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20188,"bootTime":1765952255,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:54:03.247003 1952673 start.go:143] virtualization: kvm guest
	I1217 11:54:03.249619 1952673 out.go:179] * [newest-cni-601829] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:54:03.250900 1952673 notify.go:221] Checking for updates...
	I1217 11:54:03.250925 1952673 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:54:03.252237 1952673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:54:03.254135 1952673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:03.257960 1952673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:54:03.259282 1952673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:54:03.260497 1952673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:54:03.262213 1952673 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:03.262371 1952673 config.go:182] Loaded profile config "embed-certs-542273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:03.262462 1952673 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:03.262584 1952673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:54:03.288960 1952673 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:54:03.289072 1952673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:03.350699 1952673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 11:54:03.339679509 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:03.350861 1952673 docker.go:319] overlay module found
	I1217 11:54:03.352886 1952673 out.go:179] * Using the docker driver based on user configuration
	I1217 11:54:03.354255 1952673 start.go:309] selected driver: docker
	I1217 11:54:03.354272 1952673 start.go:927] validating driver "docker" against <nil>
	I1217 11:54:03.354284 1952673 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:54:03.354866 1952673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:03.418358 1952673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 11:54:03.407378494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:03.418793 1952673 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:54:03.418858 1952673 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:54:03.419157 1952673 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:03.423693 1952673 out.go:179] * Using Docker driver with root privileges
	I1217 11:54:03.425202 1952673 cni.go:84] Creating CNI manager for ""
	I1217 11:54:03.425298 1952673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:03.425311 1952673 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:54:03.425416 1952673 start.go:353] cluster config:
	{Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:03.427017 1952673 out.go:179] * Starting "newest-cni-601829" primary control-plane node in "newest-cni-601829" cluster
	I1217 11:54:03.428206 1952673 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:54:03.429483 1952673 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:54:03.430934 1952673 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:03.430977 1952673 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:54:03.431000 1952673 cache.go:65] Caching tarball of preloaded images
	I1217 11:54:03.431042 1952673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:54:03.431110 1952673 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:54:03.431123 1952673 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 11:54:03.431235 1952673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json ...
	I1217 11:54:03.431266 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json: {Name:mk2d370f2ff2347a1af47e8ce66acf5877fe4672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:03.456193 1952673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:54:03.456244 1952673 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:54:03.456274 1952673 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:54:03.456318 1952673 start.go:360] acquireMachinesLock for newest-cni-601829: {Name:mk9faceab19a04d2aa54df7eaada9c8c27536be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:03.456450 1952673 start.go:364] duration metric: took 104.148µs to acquireMachinesLock for "newest-cni-601829"
	I1217 11:54:03.456487 1952673 start.go:93] Provisioning new machine with config: &{Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:03.456595 1952673 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:54:02.206623 1943967 out.go:252]   - Configuring RBAC rules ...
	I1217 11:54:02.206808 1943967 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:54:02.210930 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:54:02.218082 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:54:02.223874 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:54:02.227076 1943967 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:54:02.230464 1943967 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:54:02.567052 1943967 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:54:02.983242 1943967 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:54:03.563597 1943967 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:54:03.564512 1943967 kubeadm.go:319] 
	I1217 11:54:03.564612 1943967 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:54:03.564625 1943967 kubeadm.go:319] 
	I1217 11:54:03.564718 1943967 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:54:03.564728 1943967 kubeadm.go:319] 
	I1217 11:54:03.564758 1943967 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:54:03.564856 1943967 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:54:03.564968 1943967 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:54:03.564990 1943967 kubeadm.go:319] 
	I1217 11:54:03.565073 1943967 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:54:03.565083 1943967 kubeadm.go:319] 
	I1217 11:54:03.565148 1943967 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:54:03.565159 1943967 kubeadm.go:319] 
	I1217 11:54:03.565224 1943967 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:54:03.565327 1943967 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:54:03.565427 1943967 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:54:03.565440 1943967 kubeadm.go:319] 
	I1217 11:54:03.565574 1943967 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:54:03.565690 1943967 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:54:03.565700 1943967 kubeadm.go:319] 
	I1217 11:54:03.565827 1943967 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wvm0wt.yk7k376wwexjgpk5 \
	I1217 11:54:03.566018 1943967 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:54:03.566049 1943967 kubeadm.go:319] 	--control-plane 
	I1217 11:54:03.566053 1943967 kubeadm.go:319] 
	I1217 11:54:03.566126 1943967 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:54:03.566133 1943967 kubeadm.go:319] 
	I1217 11:54:03.566203 1943967 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wvm0wt.yk7k376wwexjgpk5 \
	I1217 11:54:03.566293 1943967 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:54:03.569103 1943967 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:54:03.569289 1943967 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
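(Editorial aside, not part of the log: the --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 over the cluster CA's public key. A sketch of recomputing it on the control-plane node, following the standard kubeadm documentation and assuming the default RSA CA key:)

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'   # should match 72ca69e7... above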
	I1217 11:54:03.569323 1943967 cni.go:84] Creating CNI manager for ""
	I1217 11:54:03.569334 1943967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:03.571641 1943967 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 11:54:01.173599 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	W1217 11:54:03.674139 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	I1217 11:54:00.493039 1949672 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-382022:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.420245328s)
	I1217 11:54:00.494073 1949672 kic.go:203] duration metric: took 4.421432015s to extract preloaded images to volume ...
	W1217 11:54:00.494339 1949672 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:54:00.494470 1949672 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:54:00.494569 1949672 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:54:00.587308 1949672 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-382022 --name default-k8s-diff-port-382022 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-382022 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-382022 --network default-k8s-diff-port-382022 --ip 192.168.76.2 --volume default-k8s-diff-port-382022:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:54:01.174656 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Running}}
	I1217 11:54:01.193864 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:01.213087 1949672 cli_runner.go:164] Run: docker exec default-k8s-diff-port-382022 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:54:01.262496 1949672 oci.go:144] the created container "default-k8s-diff-port-382022" has a running status.
	I1217 11:54:01.262578 1949672 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa...
	I1217 11:54:01.400315 1949672 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:54:01.438617 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:01.461515 1949672 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:54:01.461569 1949672 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-382022 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:54:01.528311 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:01.551286 1949672 machine.go:94] provisionDockerMachine start ...
	I1217 11:54:01.551394 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:01.576202 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:01.576504 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:01.576520 1949672 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:54:01.719918 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:54:01.719956 1949672 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-382022"
	I1217 11:54:01.720039 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:01.743427 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:01.743773 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:01.743799 1949672 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382022 && echo "default-k8s-diff-port-382022" | sudo tee /etc/hostname
	I1217 11:54:01.897201 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:54:01.897282 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:01.920720 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:01.921007 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:01.921043 1949672 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:54:02.059020 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:54:02.059057 1949672 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:54:02.059085 1949672 ubuntu.go:190] setting up certificates
	I1217 11:54:02.059102 1949672 provision.go:84] configureAuth start
	I1217 11:54:02.059189 1949672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:54:02.082451 1949672 provision.go:143] copyHostCerts
	I1217 11:54:02.082521 1949672 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:54:02.082563 1949672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:54:02.082635 1949672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:54:02.082759 1949672 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:54:02.082773 1949672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:54:02.082809 1949672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:54:02.082904 1949672 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:54:02.082917 1949672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:54:02.082967 1949672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:54:02.083053 1949672 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382022 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-382022 localhost minikube]
	I1217 11:54:02.119172 1949672 provision.go:177] copyRemoteCerts
	I1217 11:54:02.119240 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:54:02.119304 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.139354 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:02.239050 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 11:54:02.260116 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:54:02.279598 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 11:54:02.298374 1949672 provision.go:87] duration metric: took 239.251571ms to configureAuth
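(Editorial aside: the server cert generated above embeds the SANs listed in the log — 127.0.0.1, 192.168.76.2, the profile name, localhost, minikube — and was copied to /etc/docker/server.pem. A hedged sketch of confirming the SANs on the node with plain openssl:)

  sudo openssl x509 -in /etc/docker/server.pem -noout -text \
    | grep -A1 'Subject Alternative Name'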
	I1217 11:54:02.298406 1949672 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:54:02.298603 1949672 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:02.298789 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.317959 1949672 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:02.318254 1949672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I1217 11:54:02.318274 1949672 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:54:02.648494 1949672 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:54:02.648524 1949672 machine.go:97] duration metric: took 1.097212077s to provisionDockerMachine
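(Editorial aside: the step above writes an insecure-registry option into /etc/sysconfig/crio.minikube and restarts crio. A minimal sketch of checking that the drop-in exists and is referenced by the crio unit — the EnvironmentFile wiring is an assumption about the kicbase image, not something this log shows:)

  sudo cat /etc/sysconfig/crio.minikube
  systemctl cat crio | grep -i 'crio.minikube'   # assumed: the unit sources the file via EnvironmentFile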
	I1217 11:54:02.648554 1949672 client.go:176] duration metric: took 7.254405796s to LocalClient.Create
	I1217 11:54:02.648579 1949672 start.go:167] duration metric: took 7.254501293s to libmachine.API.Create "default-k8s-diff-port-382022"
	I1217 11:54:02.648590 1949672 start.go:293] postStartSetup for "default-k8s-diff-port-382022" (driver="docker")
	I1217 11:54:02.648607 1949672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:54:02.648682 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:54:02.648736 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.670640 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:02.775360 1949672 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:54:02.780680 1949672 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:54:02.780722 1949672 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:54:02.780738 1949672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:54:02.780805 1949672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:54:02.780899 1949672 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:54:02.781026 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:54:02.792231 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:02.819579 1949672 start.go:296] duration metric: took 170.967603ms for postStartSetup
	I1217 11:54:02.820010 1949672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:54:02.843246 1949672 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:54:02.843608 1949672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:54:02.843697 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:02.873156 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:02.975092 1949672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:54:02.981423 1949672 start.go:128] duration metric: took 7.589272064s to createHost
	I1217 11:54:02.981457 1949672 start.go:83] releasing machines lock for "default-k8s-diff-port-382022", held for 7.589490305s
	I1217 11:54:02.981561 1949672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:54:03.008307 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:03.008401 1949672 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:03.008424 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:03.008472 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:03.008510 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:03.008563 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:03.008630 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:03.008724 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:03.008784 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:03.030211 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:03.141004 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:03.161797 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:03.181838 1949672 ssh_runner.go:195] Run: openssl version
	I1217 11:54:03.188734 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.197800 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:03.206389 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.210889 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.210964 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:03.252036 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:03.261872 1949672 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
	I1217 11:54:03.270735 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.280714 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:03.290850 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.295389 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.295458 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:03.344285 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:03.354092 1949672 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:03.362891 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.371247 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:03.384516 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.389783 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.389862 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:03.435806 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:03.446736 1949672 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:54:03.456667 1949672 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:54:03.461284 1949672 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
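(Editorial aside: the certificate plumbing above is the usual OpenSSL hashed-symlink scheme — each PEM under /usr/share/ca-certificates gets a link in /etc/ssl/certs named after its subject hash. A condensed sketch of the same steps for one of the certs named in the log:)

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 above
  command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates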
	I1217 11:54:03.465385 1949672 ssh_runner.go:195] Run: cat /version.json
	I1217 11:54:03.465467 1949672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:54:03.469963 1949672 ssh_runner.go:195] Run: systemctl --version
	I1217 11:54:03.535799 1949672 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:54:03.585558 1949672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:54:03.591717 1949672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:54:03.591801 1949672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:54:03.624994 1949672 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:54:03.625025 1949672 start.go:496] detecting cgroup driver to use...
	I1217 11:54:03.625064 1949672 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:54:03.625134 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:54:03.647522 1949672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:54:03.666024 1949672 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:54:03.666087 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:54:03.691460 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:54:03.716853 1949672 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:54:03.830163 1949672 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:54:03.965318 1949672 docker.go:234] disabling docker service ...
	I1217 11:54:03.965389 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:54:04.000039 1949672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:54:04.018991 1949672 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:54:04.137333 1949672 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:54:04.241805 1949672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:54:04.257629 1949672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:54:04.274423 1949672 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:54:04.274514 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.288085 1949672 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:54:04.288159 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.300633 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.310695 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.321774 1949672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:54:04.332167 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.342059 1949672 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.359057 1949672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:04.369920 1949672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:54:04.378871 1949672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:54:04.388126 1949672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:04.479654 1949672 ssh_runner.go:195] Run: sudo systemctl restart crio
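(Editorial aside: a quick way to confirm the sed edits above landed in /etc/crio/crio.conf.d/02-crio.conf before the restart is a plain grep; the expected values are exactly what the commands wrote:)

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, given the edits above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #     "net.ipv4.ip_unprivileged_port_start=0",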
	I1217 11:54:05.175060 1949672 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:54:05.175133 1949672 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:54:05.179563 1949672 start.go:564] Will wait 60s for crictl version
	I1217 11:54:05.179632 1949672 ssh_runner.go:195] Run: which crictl
	I1217 11:54:05.183637 1949672 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:54:05.213404 1949672 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:54:05.213500 1949672 ssh_runner.go:195] Run: crio --version
	I1217 11:54:05.245866 1949672 ssh_runner.go:195] Run: crio --version
	I1217 11:54:05.283750 1949672 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 11:54:03.573203 1943967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:54:03.579017 1943967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:54:03.579037 1943967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:54:03.597054 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
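(Editorial aside: the log recommends kindnet for the docker driver + crio runtime and applies its manifest above. A hedged follow-up check — the app=kindnet label is an assumption about minikube's kindnet DaemonSet, not something printed here:)

  sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get pods -l app=kindnet -o wide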
	I1217 11:54:03.872723 1943967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:54:03.872887 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:03.872980 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-542273 minikube.k8s.io/updated_at=2025_12_17T11_54_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=embed-certs-542273 minikube.k8s.io/primary=true
	I1217 11:54:03.891289 1943967 ops.go:34] apiserver oom_adj: -16
	I1217 11:54:03.998627 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:04.499505 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:04.998656 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:05.498670 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:05.285185 1949672 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-382022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:54:05.307642 1949672 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:54:05.312261 1949672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
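(Editorial aside: the one-liner above is an idempotent hosts-file update — strip any existing host.minikube.internal line, append the fresh mapping, then copy the temp file back with sudo. Unrolled as a sketch, with a literal tab between address and name:)

  grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
  printf '192.168.76.1\thost.minikube.internal\n' >> /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts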
	I1217 11:54:05.323253 1949672 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:54:05.323405 1949672 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:54:05.323466 1949672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:05.364791 1949672 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:05.364821 1949672 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:54:05.364879 1949672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:05.394380 1949672 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:05.394408 1949672 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:54:05.394418 1949672 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.3 crio true true} ...
	I1217 11:54:05.394544 1949672 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-382022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
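(Editorial aside: the kubelet unit and flags dumped above end up as a systemd drop-in; a few lines below the log scp's them to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A minimal sanity check on the node would be:)

  sudo systemctl cat kubelet   # should show the base unit plus the 10-kubeadm.conf drop-in with the ExecStart above
  sudo systemctl daemon-reload && systemctl is-enabled kubelet || true   # "disabled" is expected per the earlier kubeadm warning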
	I1217 11:54:05.394637 1949672 ssh_runner.go:195] Run: crio config
	I1217 11:54:05.446258 1949672 cni.go:84] Creating CNI manager for ""
	I1217 11:54:05.446293 1949672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:05.446328 1949672 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:54:05.446366 1949672 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382022 NodeName:default-k8s-diff-port-382022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:54:05.446575 1949672 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-382022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:54:05.446670 1949672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:54:05.455762 1949672 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:54:05.455842 1949672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:54:05.465013 1949672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 11:54:05.479958 1949672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:54:05.499965 1949672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1217 11:54:05.516505 1949672 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:54:05.521676 1949672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:54:05.533354 1949672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:05.626902 1949672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:05.667623 1949672 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022 for IP: 192.168.76.2
	I1217 11:54:05.667648 1949672 certs.go:195] generating shared ca certs ...
	I1217 11:54:05.667678 1949672 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.667878 1949672 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:54:05.667942 1949672 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:54:05.667958 1949672 certs.go:257] generating profile certs ...
	I1217 11:54:05.668041 1949672 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key
	I1217 11:54:05.668063 1949672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.crt with IP's: []
	I1217 11:54:05.836493 1949672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.crt ...
	I1217 11:54:05.836521 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.crt: {Name:mk6d7fcb7a2ad0f3950b9dcf68fb09630ede687c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.836703 1949672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key ...
	I1217 11:54:05.836719 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key: {Name:mk05036f39c3e70ff9d1cd2a48d6c33d6185c94f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.836802 1949672 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a
	I1217 11:54:05.836818 1949672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:54:05.966442 1949672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a ...
	I1217 11:54:05.966472 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a: {Name:mk87ca2f10e9e49dc362b4350b3b634875eba947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.966657 1949672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a ...
	I1217 11:54:05.966673 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a: {Name:mk1805b63a3f52e0c3b884bd061011d971eee143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:05.966751 1949672 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt.e7b7ff3a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt
	I1217 11:54:05.966831 1949672 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key
	I1217 11:54:05.966887 1949672 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key
	I1217 11:54:05.966905 1949672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt with IP's: []
	I1217 11:54:06.065081 1949672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt ...
	I1217 11:54:06.065106 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt: {Name:mkf1d916f1ba98c0e284ef3c153c52de42ea1866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:06.065288 1949672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key ...
	I1217 11:54:06.065310 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key: {Name:mka65834a6bf35447449dadcc877f37e4dc848f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:06.065579 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:06.065638 1949672 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:06.065649 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:06.065683 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:06.065712 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:06.065742 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:06.065797 1949672 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:06.066594 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:54:06.088950 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:54:06.109101 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:54:06.129213 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:54:06.150168 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 11:54:06.170860 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:54:06.191954 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:54:06.213109 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:54:06.234297 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:06.255826 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:06.275863 1949672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:06.295791 1949672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:54:06.309480 1949672 ssh_runner.go:195] Run: openssl version
	I1217 11:54:06.316899 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.325317 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:06.333385 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.337481 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.337554 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:06.375074 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:06.384040 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.393933 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:06.404378 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.409030 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.409094 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:06.448446 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:06.458647 1949672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.468044 1949672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:06.481939 1949672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.486523 1949672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.486673 1949672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:06.535701 1949672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:06.546247 1949672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:54:06.550918 1949672 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:54:06.550982 1949672 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:06.551089 1949672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:54:06.551148 1949672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:54:06.586826 1949672 cri.go:89] found id: ""
	I1217 11:54:06.586904 1949672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:54:06.597284 1949672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:54:06.607377 1949672 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:54:06.607457 1949672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:54:06.616031 1949672 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:54:06.616060 1949672 kubeadm.go:158] found existing configuration files:
	
	I1217 11:54:06.616110 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 11:54:06.624903 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:54:06.624962 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:54:06.633052 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 11:54:06.641516 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:54:06.641595 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:54:06.649603 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 11:54:06.657882 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:54:06.657946 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:54:06.665836 1949672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 11:54:06.674140 1949672 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:54:06.674193 1949672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:54:06.681865 1949672 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:54:06.722697 1949672 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:54:06.722786 1949672 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:54:06.745112 1949672 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:54:06.745212 1949672 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:54:06.745256 1949672 kubeadm.go:319] OS: Linux
	I1217 11:54:06.745310 1949672 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:54:06.745366 1949672 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:54:06.745427 1949672 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:54:06.745506 1949672 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:54:06.745590 1949672 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:54:06.745647 1949672 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:54:06.745705 1949672 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:54:06.745757 1949672 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:54:06.808499 1949672 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:54:06.808685 1949672 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:54:06.808812 1949672 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:54:06.817180 1949672 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:54:03.458943 1952673 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:54:03.459234 1952673 start.go:159] libmachine.API.Create for "newest-cni-601829" (driver="docker")
	I1217 11:54:03.459273 1952673 client.go:173] LocalClient.Create starting
	I1217 11:54:03.459348 1952673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:54:03.459395 1952673 main.go:143] libmachine: Decoding PEM data...
	I1217 11:54:03.459430 1952673 main.go:143] libmachine: Parsing certificate...
	I1217 11:54:03.459514 1952673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:54:03.459573 1952673 main.go:143] libmachine: Decoding PEM data...
	I1217 11:54:03.459590 1952673 main.go:143] libmachine: Parsing certificate...
	I1217 11:54:03.460030 1952673 cli_runner.go:164] Run: docker network inspect newest-cni-601829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:54:03.479513 1952673 cli_runner.go:211] docker network inspect newest-cni-601829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:54:03.479634 1952673 network_create.go:284] running [docker network inspect newest-cni-601829] to gather additional debugging logs...
	I1217 11:54:03.479675 1952673 cli_runner.go:164] Run: docker network inspect newest-cni-601829
	W1217 11:54:03.502906 1952673 cli_runner.go:211] docker network inspect newest-cni-601829 returned with exit code 1
	I1217 11:54:03.502936 1952673 network_create.go:287] error running [docker network inspect newest-cni-601829]: docker network inspect newest-cni-601829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-601829 not found
	I1217 11:54:03.502950 1952673 network_create.go:289] output of [docker network inspect newest-cni-601829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-601829 not found
	
	** /stderr **
	I1217 11:54:03.503091 1952673 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:54:03.523660 1952673 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3d92c06bf7e1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:dc:f5:1a:95:c6} reservation:<nil>}
	I1217 11:54:03.524406 1952673 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e34a3db6b97 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:b3:69:9a:9a:9f} reservation:<nil>}
	I1217 11:54:03.525252 1952673 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d8460370d724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:bb:68:9a:9d:ac} reservation:<nil>}
	I1217 11:54:03.525986 1952673 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-009b4cca67d1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:77:e4:db:4d:bd} reservation:<nil>}
	I1217 11:54:03.526880 1952673 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020677c0}
	I1217 11:54:03.526904 1952673 network_create.go:124] attempt to create docker network newest-cni-601829 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 11:54:03.526950 1952673 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-601829 newest-cni-601829
	I1217 11:54:03.585054 1952673 network_create.go:108] docker network newest-cni-601829 192.168.85.0/24 created
	I1217 11:54:03.585095 1952673 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-601829" container
	I1217 11:54:03.585178 1952673 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:54:03.608125 1952673 cli_runner.go:164] Run: docker volume create newest-cni-601829 --label name.minikube.sigs.k8s.io=newest-cni-601829 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:54:03.631958 1952673 oci.go:103] Successfully created a docker volume newest-cni-601829
	I1217 11:54:03.632062 1952673 cli_runner.go:164] Run: docker run --rm --name newest-cni-601829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-601829 --entrypoint /usr/bin/test -v newest-cni-601829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:54:04.135768 1952673 oci.go:107] Successfully prepared a docker volume newest-cni-601829
	I1217 11:54:04.135854 1952673 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:04.135872 1952673 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:54:04.135939 1952673 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-601829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:54:07.807238 1952673 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-601829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.67122886s)
	I1217 11:54:07.807280 1952673 kic.go:203] duration metric: took 3.671401541s to extract preloaded images to volume ...
	W1217 11:54:07.807397 1952673 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:54:07.807472 1952673 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:54:07.807525 1952673 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:54:07.869024 1952673 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-601829 --name newest-cni-601829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-601829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-601829 --network newest-cni-601829 --ip 192.168.85.2 --volume newest-cni-601829:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:54:08.203357 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Running}}
	I1217 11:54:08.232006 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:05.999667 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:06.499280 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:06.998715 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:07.499173 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:07.998683 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:08.499367 1943967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:08.611919 1943967 kubeadm.go:1114] duration metric: took 4.739071892s to wait for elevateKubeSystemPrivileges
	I1217 11:54:08.611960 1943967 kubeadm.go:403] duration metric: took 17.38019056s to StartCluster
	I1217 11:54:08.611983 1943967 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:08.612055 1943967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:08.614611 1943967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:08.615294 1943967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:54:08.615384 1943967 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:08.615413 1943967 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:54:08.616522 1943967 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-542273"
	I1217 11:54:08.616579 1943967 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-542273"
	I1217 11:54:08.616757 1943967 host.go:66] Checking if "embed-certs-542273" exists ...
	I1217 11:54:08.616587 1943967 addons.go:70] Setting default-storageclass=true in profile "embed-certs-542273"
	I1217 11:54:08.616838 1943967 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-542273"
	I1217 11:54:08.616663 1943967 config.go:182] Loaded profile config "embed-certs-542273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:08.617466 1943967 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:54:08.617515 1943967 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:54:08.619593 1943967 out.go:179] * Verifying Kubernetes components...
	I1217 11:54:08.624886 1943967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:08.656574 1943967 addons.go:239] Setting addon default-storageclass=true in "embed-certs-542273"
	I1217 11:54:08.656739 1943967 host.go:66] Checking if "embed-certs-542273" exists ...
	I1217 11:54:08.657484 1943967 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:54:08.665416 1943967 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:54:08.666939 1943967 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:08.666963 1943967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:54:08.667035 1943967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542273
	I1217 11:54:08.702919 1943967 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:08.702952 1943967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:54:08.703127 1943967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542273
	I1217 11:54:08.707477 1943967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34606 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/embed-certs-542273/id_rsa Username:docker}
	I1217 11:54:08.734734 1943967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34606 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/embed-certs-542273/id_rsa Username:docker}
	I1217 11:54:08.811622 1943967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:54:08.850787 1943967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:08.865097 1943967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:08.880382 1943967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:09.034420 1943967 start.go:1013] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 11:54:09.035503 1943967 node_ready.go:35] waiting up to 6m0s for node "embed-certs-542273" to be "Ready" ...
	I1217 11:54:09.245302 1943967 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 11:54:06.173650 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	W1217 11:54:08.677174 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	I1217 11:54:06.918309 1949672 out.go:252]   - Generating certificates and keys ...
	I1217 11:54:06.918424 1949672 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:54:06.918524 1949672 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:54:06.918647 1949672 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:54:07.073056 1949672 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:54:07.251338 1949672 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:54:07.356906 1949672 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:54:07.408562 1949672 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:54:07.408768 1949672 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-382022 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:54:07.519138 1949672 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:54:07.519489 1949672 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-382022 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:54:07.641980 1949672 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:54:07.914870 1949672 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:54:07.976155 1949672 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:54:07.976308 1949672 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:54:08.481301 1949672 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:54:08.662795 1949672 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:54:08.823186 1949672 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:54:09.305092 1949672 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:54:09.513318 1949672 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:54:09.513888 1949672 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:54:09.517635 1949672 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:54:09.519187 1949672 out.go:252]   - Booting up control plane ...
	I1217 11:54:09.519332 1949672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:54:09.519446 1949672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:54:09.519970 1949672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:54:09.553085 1949672 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:54:09.553207 1949672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:54:09.561418 1949672 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:54:09.561661 1949672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:54:09.561750 1949672 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:54:09.680741 1949672 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:54:09.680916 1949672 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:54:09.246521 1943967 addons.go:530] duration metric: took 631.103441ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:54:09.538406 1943967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-542273" context rescaled to 1 replicas
	I1217 11:54:08.261275 1952673 cli_runner.go:164] Run: docker exec newest-cni-601829 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:54:08.311425 1952673 oci.go:144] the created container "newest-cni-601829" has a running status.
	I1217 11:54:08.311462 1952673 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa...
	I1217 11:54:08.398560 1952673 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:54:08.434179 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:08.456022 1952673 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:54:08.456043 1952673 kic_runner.go:114] Args: [docker exec --privileged newest-cni-601829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:54:08.513047 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:08.540098 1952673 machine.go:94] provisionDockerMachine start ...
	I1217 11:54:08.540203 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:08.567108 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:08.567642 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:08.567718 1952673 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:54:08.568606 1952673 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34910->127.0.0.1:34616: read: connection reset by peer
	I1217 11:54:11.709796 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-601829
	
	I1217 11:54:11.709846 1952673 ubuntu.go:182] provisioning hostname "newest-cni-601829"
	I1217 11:54:11.709923 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:11.728488 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:11.728726 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:11.728742 1952673 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-601829 && echo "newest-cni-601829" | sudo tee /etc/hostname
	I1217 11:54:11.876758 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-601829
	
	I1217 11:54:11.876841 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:11.896951 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:11.897243 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:11.897262 1952673 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-601829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-601829/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-601829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:54:12.054909 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:54:12.054940 1952673 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:54:12.054985 1952673 ubuntu.go:190] setting up certificates
	I1217 11:54:12.055006 1952673 provision.go:84] configureAuth start
	I1217 11:54:12.055078 1952673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-601829
	I1217 11:54:12.082522 1952673 provision.go:143] copyHostCerts
	I1217 11:54:12.082632 1952673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:54:12.082664 1952673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:54:12.082752 1952673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:54:12.082895 1952673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:54:12.082911 1952673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:54:12.082957 1952673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:54:12.083116 1952673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:54:12.083134 1952673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:54:12.083175 1952673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:54:12.083294 1952673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.newest-cni-601829 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-601829]
	I1217 11:54:12.116241 1952673 provision.go:177] copyRemoteCerts
	I1217 11:54:12.116317 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:54:12.116413 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.143803 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:12.250113 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:54:12.272303 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:54:12.293840 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:54:12.320738 1952673 provision.go:87] duration metric: took 265.71885ms to configureAuth
	I1217 11:54:12.320768 1952673 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:54:12.320994 1952673 config.go:182] Loaded profile config "newest-cni-601829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:12.321118 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.345826 1952673 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:12.346154 1952673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34616 <nil> <nil>}
	I1217 11:54:12.346177 1952673 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:54:12.646431 1952673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:54:12.646457 1952673 machine.go:97] duration metric: took 4.106334409s to provisionDockerMachine
	I1217 11:54:12.646472 1952673 client.go:176] duration metric: took 9.187188683s to LocalClient.Create
	I1217 11:54:12.646493 1952673 start.go:167] duration metric: took 9.187260602s to libmachine.API.Create "newest-cni-601829"
	I1217 11:54:12.646501 1952673 start.go:293] postStartSetup for "newest-cni-601829" (driver="docker")
	I1217 11:54:12.646517 1952673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:54:12.646599 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:54:12.646654 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.667021 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:12.764462 1952673 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:54:12.768266 1952673 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:54:12.768300 1952673 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:54:12.768313 1952673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:54:12.768374 1952673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:54:12.768499 1952673 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:54:12.768663 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:54:12.777157 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:12.798867 1952673 start.go:296] duration metric: took 152.346268ms for postStartSetup
	I1217 11:54:12.799242 1952673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-601829
	I1217 11:54:12.821328 1952673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json ...
	I1217 11:54:12.821714 1952673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:54:12.821775 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.843249 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:12.937731 1952673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:54:12.943151 1952673 start.go:128] duration metric: took 9.48653629s to createHost
	I1217 11:54:12.943183 1952673 start.go:83] releasing machines lock for "newest-cni-601829", held for 9.486713472s
	I1217 11:54:12.943262 1952673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-601829
	I1217 11:54:12.962925 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:12.962980 1952673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:12.962992 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:12.963028 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:12.963067 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:12.963100 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:12.963162 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:12.963246 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:12.963317 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:12.983016 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:13.122072 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:13.142675 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:13.160748 1952673 ssh_runner.go:195] Run: openssl version
	I1217 11:54:13.166971 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.174990 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:13.189067 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.193888 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.193961 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:13.230113 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:13.238740 1952673 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	W1217 11:54:11.173180 1938284 node_ready.go:57] node "no-preload-737478" has "Ready":"False" status (will retry)
	I1217 11:54:11.672503 1938284 node_ready.go:49] node "no-preload-737478" is "Ready"
	I1217 11:54:11.672612 1938284 node_ready.go:38] duration metric: took 15.00348409s for node "no-preload-737478" to be "Ready" ...
	I1217 11:54:11.672640 1938284 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:54:11.672697 1938284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:54:11.686423 1938284 api_server.go:72] duration metric: took 15.427831795s to wait for apiserver process to appear ...
	I1217 11:54:11.686450 1938284 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:54:11.686472 1938284 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 11:54:11.692883 1938284 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 11:54:11.694000 1938284 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 11:54:11.694026 1938284 api_server.go:131] duration metric: took 7.568574ms to wait for apiserver health ...
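	(The healthz wait above simply polls the apiserver's /healthz endpoint over HTTPS until it answers 200 with the body "ok". A minimal sketch of such a probe, assuming the endpoint from the log and a client that skips serving-certificate verification; the log does not show how minikube's own client validates the cert.)

	// healthzprobe: poll an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: the serving cert is not in the host trust store,
				// so the probe skips verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.103.2:8443/healthz" // endpoint taken from the log above
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver never became healthy")
	}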
	I1217 11:54:11.694036 1938284 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:11.697666 1938284 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:11.697701 1938284 system_pods.go:61] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:11.697708 1938284 system_pods.go:61] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:11.697721 1938284 system_pods.go:61] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:11.697726 1938284 system_pods.go:61] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:11.697732 1938284 system_pods.go:61] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:11.697737 1938284 system_pods.go:61] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:11.697742 1938284 system_pods.go:61] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:11.697761 1938284 system_pods.go:61] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:11.697773 1938284 system_pods.go:74] duration metric: took 3.728564ms to wait for pod list to return data ...
	I1217 11:54:11.697786 1938284 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:11.700360 1938284 default_sa.go:45] found service account: "default"
	I1217 11:54:11.700380 1938284 default_sa.go:55] duration metric: took 2.58861ms for default service account to be created ...
	I1217 11:54:11.700388 1938284 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:54:11.703340 1938284 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:11.703369 1938284 system_pods.go:89] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:11.703375 1938284 system_pods.go:89] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:11.703382 1938284 system_pods.go:89] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:11.703385 1938284 system_pods.go:89] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:11.703389 1938284 system_pods.go:89] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:11.703393 1938284 system_pods.go:89] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:11.703398 1938284 system_pods.go:89] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:11.703417 1938284 system_pods.go:89] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:11.703462 1938284 retry.go:31] will retry after 238.206881ms: missing components: kube-dns
	I1217 11:54:11.947957 1938284 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:11.948003 1938284 system_pods.go:89] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:11.948013 1938284 system_pods.go:89] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:11.948021 1938284 system_pods.go:89] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:11.948036 1938284 system_pods.go:89] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:11.948042 1938284 system_pods.go:89] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:11.948047 1938284 system_pods.go:89] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:11.948052 1938284 system_pods.go:89] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:11.948059 1938284 system_pods.go:89] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:11.948086 1938284 retry.go:31] will retry after 336.185113ms: missing components: kube-dns
	I1217 11:54:12.289119 1938284 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:12.289156 1938284 system_pods.go:89] "coredns-7d764666f9-n2kvr" [4f523a12-a03c-4a2e-8e89-0c9d3b51612a] Running
	I1217 11:54:12.289164 1938284 system_pods.go:89] "etcd-no-preload-737478" [4a83714f-f222-4275-ac67-705d09e6bfc8] Running
	I1217 11:54:12.289169 1938284 system_pods.go:89] "kindnet-fnspp" [4bc59c8f-5cfc-4b84-9560-1de53ffc019e] Running
	I1217 11:54:12.289175 1938284 system_pods.go:89] "kube-apiserver-no-preload-737478" [68885014-38ff-4331-95ea-e8a51a288257] Running
	I1217 11:54:12.289181 1938284 system_pods.go:89] "kube-controller-manager-no-preload-737478" [b4af277c-d573-431d-b6ba-b32bdbdfedc1] Running
	I1217 11:54:12.289186 1938284 system_pods.go:89] "kube-proxy-5tkm8" [d1e1a3b6-95ce-43ee-a816-317f34952c21] Running
	I1217 11:54:12.289192 1938284 system_pods.go:89] "kube-scheduler-no-preload-737478" [ad12c4c1-8a58-4099-9d4c-37b39bd060ef] Running
	I1217 11:54:12.289197 1938284 system_pods.go:89] "storage-provisioner" [ed148111-0f36-4bd0-be78-0f5941b514ee] Running
	I1217 11:54:12.289208 1938284 system_pods.go:126] duration metric: took 588.813191ms to wait for k8s-apps to be running ...
	I1217 11:54:12.289222 1938284 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:54:12.289271 1938284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:12.305816 1938284 system_svc.go:56] duration metric: took 16.582374ms WaitForService to wait for kubelet
	I1217 11:54:12.305853 1938284 kubeadm.go:587] duration metric: took 16.047266093s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:54:12.305877 1938284 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:12.309808 1938284 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:12.309845 1938284 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:12.309872 1938284 node_conditions.go:105] duration metric: took 3.989067ms to run NodePressure ...
	I1217 11:54:12.309890 1938284 start.go:242] waiting for startup goroutines ...
	I1217 11:54:12.309905 1938284 start.go:247] waiting for cluster config update ...
	I1217 11:54:12.309920 1938284 start.go:256] writing updated cluster config ...
	I1217 11:54:12.310266 1938284 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:12.315325 1938284 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:54:12.319964 1938284 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.325049 1938284 pod_ready.go:94] pod "coredns-7d764666f9-n2kvr" is "Ready"
	I1217 11:54:12.325078 1938284 pod_ready.go:86] duration metric: took 5.090952ms for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.327456 1938284 pod_ready.go:83] waiting for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.332285 1938284 pod_ready.go:94] pod "etcd-no-preload-737478" is "Ready"
	I1217 11:54:12.332311 1938284 pod_ready.go:86] duration metric: took 4.824504ms for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.334673 1938284 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.339229 1938284 pod_ready.go:94] pod "kube-apiserver-no-preload-737478" is "Ready"
	I1217 11:54:12.339255 1938284 pod_ready.go:86] duration metric: took 4.556242ms for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.341989 1938284 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.720803 1938284 pod_ready.go:94] pod "kube-controller-manager-no-preload-737478" is "Ready"
	I1217 11:54:12.720842 1938284 pod_ready.go:86] duration metric: took 378.825955ms for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:12.920663 1938284 pod_ready.go:83] waiting for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.320359 1938284 pod_ready.go:94] pod "kube-proxy-5tkm8" is "Ready"
	I1217 11:54:13.320394 1938284 pod_ready.go:86] duration metric: took 399.697758ms for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.520652 1938284 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.919862 1938284 pod_ready.go:94] pod "kube-scheduler-no-preload-737478" is "Ready"
	I1217 11:54:13.919890 1938284 pod_ready.go:86] duration metric: took 399.210577ms for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:13.919902 1938284 pod_ready.go:40] duration metric: took 1.60454372s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:54:13.978651 1938284 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:54:13.981553 1938284 out.go:179] * Done! kubectl is now configured to use "no-preload-737478" cluster and "default" namespace by default
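	(The "waiting for k8s-apps to be running" block above is a retry loop: list the kube-system pods, and if anything is still Pending (here kube-dns), sleep a growing delay and list again. A minimal sketch of the same loop driven through kubectl instead of minikube's in-process client; kubectl on PATH and a configured kubeconfig are assumptions.)

	// podwait: retry until no kube-system pod is outside phase Running.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 20; attempt++ {
			// List kube-system pods that are not in phase Running.
			out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
				"--field-selector=status.phase!=Running", "-o", "name").Output()
			if err == nil && strings.TrimSpace(string(out)) == "" {
				fmt.Println("all kube-system pods are Running")
				return
			}
			fmt.Printf("attempt %d: still waiting, retrying in %v\n", attempt, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, roughly like the 238ms -> 336ms retries above
		}
		fmt.Println("gave up waiting for kube-system pods")
	}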
	I1217 11:54:13.247996 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.256422 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:13.264316 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.268128 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.268203 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:13.315817 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:13.326466 1952673 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:54:13.336552 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.346156 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:13.355310 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.359830 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.359905 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:13.396396 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:13.405066 1952673 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
	I1217 11:54:13.413671 1952673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:54:13.417627 1952673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
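	(The openssl/ln pairs above install each CA into OpenSSL's hash-lookup directory: "openssl x509 -hash -noout" prints the subject-name hash, and /etc/ssl/certs/<hash>.0 is symlinked at the PEM file so TLS clients can find the CA by that hash. A minimal sketch of the same step, reusing the certificate paths from the log; root access is required and error handling is deliberately thin.)

	// cahash: compute a certificate's OpenSSL subject hash and install the
	// /etc/ssl/certs/<hash>.0 symlink, as the log lines above do with sed/ln.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // mirror ln -f: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		// Certificate names taken from the log; adjust for your own host.
		for _, cert := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/16729412.pem",
		} {
			if err := installCA(cert); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}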
	I1217 11:54:13.421749 1952673 ssh_runner.go:195] Run: cat /version.json
	I1217 11:54:13.421825 1952673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:54:13.425903 1952673 ssh_runner.go:195] Run: systemctl --version
	I1217 11:54:13.481905 1952673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:54:13.519576 1952673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:54:13.524797 1952673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:54:13.524870 1952673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:54:13.552127 1952673 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:54:13.552154 1952673 start.go:496] detecting cgroup driver to use...
	I1217 11:54:13.552188 1952673 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:54:13.552229 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:54:13.568824 1952673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:54:13.582024 1952673 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:54:13.582074 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:54:13.600261 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:54:13.618695 1952673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:54:13.713205 1952673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:54:13.806289 1952673 docker.go:234] disabling docker service ...
	I1217 11:54:13.806350 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:54:13.825391 1952673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:54:13.839145 1952673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:54:13.929113 1952673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:54:14.022583 1952673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:54:14.036976 1952673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:54:14.053712 1952673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:54:14.053781 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.065380 1952673 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:54:14.065452 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.076430 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.088712 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.101299 1952673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:54:14.112583 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.124279 1952673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.142846 1952673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:54:14.155367 1952673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:54:14.166460 1952673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:54:14.175467 1952673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:14.272925 1952673 ssh_runner.go:195] Run: sudo systemctl restart crio
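	(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf line by line: the pause image, cgroup_manager = "systemd", a conmon_cgroup = "pod" entry, and a default_sysctls list that opens unprivileged ports. A minimal sketch of the same whole-line regex rewrite; only the first two keys are shown, and the file is treated as plain text, mirroring sed rather than parsing TOML.)

	// crioconf: replace whole lines in the cri-o drop-in config by regex,
	// equivalent to the `sudo sed -i 's|^.*key = .*$|...|'` calls above.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		replacements := []struct{ pattern, repl string }{
			{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
			{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`},
		}
		out := data
		for _, r := range replacements {
			out = regexp.MustCompile(r.pattern).ReplaceAll(out, []byte(r.repl))
		}
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
		// A `systemctl restart crio` is still needed afterwards, as the log shows.
	}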
	I1217 11:54:14.445955 1952673 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:54:14.446026 1952673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:54:14.451129 1952673 start.go:564] Will wait 60s for crictl version
	I1217 11:54:14.451199 1952673 ssh_runner.go:195] Run: which crictl
	I1217 11:54:14.455481 1952673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:54:14.489601 1952673 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:54:14.489698 1952673 ssh_runner.go:195] Run: crio --version
	I1217 11:54:14.526616 1952673 ssh_runner.go:195] Run: crio --version
	I1217 11:54:14.563866 1952673 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 11:54:14.565422 1952673 cli_runner.go:164] Run: docker network inspect newest-cni-601829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:54:14.588365 1952673 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:54:14.593154 1952673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:54:14.607614 1952673 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
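	(The bash one-liner above refreshes the host.minikube.internal entry in /etc/hosts: filter out any old line for that name, append the gateway mapping, and copy the result back into place. A minimal sketch of the same filter-and-append, using the IP and hostname from this log.)

	// hostsentry: drop any existing host.minikube.internal line from /etc/hosts
	// and append the new mapping, matching the grep -v / echo one-liner above.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.85.1\thost.minikube.internal" // values from the log

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // old entry is dropped
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}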
	I1217 11:54:10.674768 1949672 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00125244s
	I1217 11:54:10.677769 1949672 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:54:10.677900 1949672 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1217 11:54:10.678018 1949672 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:54:10.678095 1949672 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:54:12.221193 1949672 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.543360868s
	I1217 11:54:13.132964 1949672 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.455221403s
	I1217 11:54:15.179643 1949672 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501810323s
	I1217 11:54:15.198199 1949672 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:54:15.210401 1949672 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:54:15.219215 1949672 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:54:15.219595 1949672 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-382022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:54:15.230278 1949672 kubeadm.go:319] [bootstrap-token] Using token: 18j5vb.prvjek6drow03x0n
	W1217 11:54:11.040124 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	W1217 11:54:13.539330 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	I1217 11:54:14.608788 1952673 kubeadm.go:884] updating cluster {Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:54:14.608936 1952673 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:14.609001 1952673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:14.652078 1952673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:14.652104 1952673 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:54:14.652164 1952673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:54:14.687496 1952673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:54:14.687525 1952673 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:54:14.687568 1952673 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 11:54:14.687682 1952673 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-601829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
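	(The kubelet unit above is delivered as a systemd drop-in; a few lines later it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= first clears the base unit's command, then the second ExecStart= supplies the kubelet flags. A minimal sketch that writes such a drop-in and reloads systemd, copying the flags from the log; root access is assumed.)

	// kubeletdropin: write the kubelet systemd drop-in and reload systemd,
	// as the 10-kubeadm.conf scp and daemon-reload steps below do.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-601829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

	[Install]
	`

	func main() {
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
			log.Fatal(err)
		}
		path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
		if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
			log.Fatal(err)
		}
		// Pick up the new drop-in before the kubelet is started.
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			log.Fatalf("daemon-reload: %v: %s", err, out)
		}
	}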
	I1217 11:54:14.687756 1952673 ssh_runner.go:195] Run: crio config
	I1217 11:54:14.742034 1952673 cni.go:84] Creating CNI manager for ""
	I1217 11:54:14.742062 1952673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:14.742083 1952673 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:54:14.742111 1952673 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-601829 NodeName:newest-cni-601829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:54:14.742272 1952673 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-601829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:54:14.742362 1952673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:54:14.750995 1952673 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:54:14.751068 1952673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:54:14.759143 1952673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 11:54:14.772329 1952673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:54:14.789302 1952673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1217 11:54:14.805896 1952673 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:54:14.809821 1952673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:54:14.820272 1952673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:14.907791 1952673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:14.938715 1952673 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829 for IP: 192.168.85.2
	I1217 11:54:14.938738 1952673 certs.go:195] generating shared ca certs ...
	I1217 11:54:14.938761 1952673 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:14.938910 1952673 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:54:14.938956 1952673 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:54:14.938966 1952673 certs.go:257] generating profile certs ...
	I1217 11:54:14.939041 1952673 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.key
	I1217 11:54:14.939067 1952673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.crt with IP's: []
	I1217 11:54:15.002286 1952673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.crt ...
	I1217 11:54:15.002315 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.crt: {Name:mkb123ab6040f3a23d0c5bc4863b7319ee083bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.002485 1952673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.key ...
	I1217 11:54:15.002496 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/client.key: {Name:mk3e7b7710383da310c6507eba0176edaaab2dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.002610 1952673 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c
	I1217 11:54:15.002628 1952673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 11:54:15.135575 1952673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c ...
	I1217 11:54:15.135607 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c: {Name:mk2201be273856597b4d2ae93ea533ac20a42c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.135777 1952673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c ...
	I1217 11:54:15.135790 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c: {Name:mk7b1d8b442a5de29a306b93e98efce5c9fba488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.135872 1952673 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt.fded5a9c -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt
	I1217 11:54:15.135948 1952673 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key.fded5a9c -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key
	I1217 11:54:15.136019 1952673 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key
	I1217 11:54:15.136035 1952673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt with IP's: []
	I1217 11:54:15.182803 1952673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt ...
	I1217 11:54:15.182833 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt: {Name:mk251d73ccfaf6668c2ffd35a465891b1c2424b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.182998 1952673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key ...
	I1217 11:54:15.183017 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key: {Name:mk70c91c0d7d67b0b7a8ca66d601cb6b7aac8ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:15.183226 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:54:15.183282 1952673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:54:15.183300 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:54:15.183343 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:54:15.183385 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:54:15.183424 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:54:15.183485 1952673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:15.184142 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:54:15.206920 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:54:15.230085 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:54:15.253075 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:54:15.272687 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:54:15.290673 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:54:15.309839 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:54:15.328916 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:54:15.348364 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:54:15.367098 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:54:15.385130 1952673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:54:15.405047 1952673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:54:15.418199 1952673 ssh_runner.go:195] Run: openssl version
	I1217 11:54:15.425226 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.432920 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:54:15.440771 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.445068 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.445119 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:54:15.483365 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:54:15.491381 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.499051 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:54:15.507373 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.511249 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.511306 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:54:15.548124 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:54:15.557121 1952673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.564716 1952673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:54:15.572596 1952673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.577173 1952673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.577223 1952673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:54:15.616973 1952673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:54:15.628112 1952673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:54:15.633805 1952673 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:54:15.633874 1952673 kubeadm.go:401] StartCluster: {Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:15.633980 1952673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:54:15.634044 1952673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:54:15.666106 1952673 cri.go:89] found id: ""
	I1217 11:54:15.666194 1952673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:54:15.674869 1952673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:54:15.683089 1952673 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:54:15.683173 1952673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:54:15.691133 1952673 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:54:15.691156 1952673 kubeadm.go:158] found existing configuration files:
	
	I1217 11:54:15.691208 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:54:15.699133 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:54:15.699195 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:54:15.707630 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:54:15.716211 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:54:15.716270 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:54:15.723991 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:54:15.732030 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:54:15.732091 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:54:15.740086 1952673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:54:15.749785 1952673 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:54:15.749851 1952673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:54:15.759003 1952673 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:54:15.802551 1952673 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:54:15.802633 1952673 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:54:15.896842 1952673 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:54:15.896913 1952673 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:54:15.896945 1952673 kubeadm.go:319] OS: Linux
	I1217 11:54:15.896985 1952673 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:54:15.897045 1952673 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:54:15.897133 1952673 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:54:15.897249 1952673 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:54:15.897336 1952673 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:54:15.897416 1952673 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:54:15.897493 1952673 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:54:15.897614 1952673 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:54:15.962567 1952673 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:54:15.962703 1952673 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:54:15.962857 1952673 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:54:15.984824 1952673 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:54:15.232362 1949672 out.go:252]   - Configuring RBAC rules ...
	I1217 11:54:15.232571 1949672 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:54:15.235683 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:54:15.241766 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:54:15.244638 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:54:15.247503 1949672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:54:15.250778 1949672 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:54:15.587309 1949672 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:54:16.005121 1949672 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:54:16.586080 1949672 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:54:16.587160 1949672 kubeadm.go:319] 
	I1217 11:54:16.587265 1949672 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:54:16.587283 1949672 kubeadm.go:319] 
	I1217 11:54:16.587404 1949672 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:54:16.587422 1949672 kubeadm.go:319] 
	I1217 11:54:16.587471 1949672 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:54:16.587572 1949672 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:54:16.587653 1949672 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:54:16.587664 1949672 kubeadm.go:319] 
	I1217 11:54:16.587749 1949672 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:54:16.587759 1949672 kubeadm.go:319] 
	I1217 11:54:16.587827 1949672 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:54:16.587837 1949672 kubeadm.go:319] 
	I1217 11:54:16.587906 1949672 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:54:16.588006 1949672 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:54:16.588072 1949672 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:54:16.588078 1949672 kubeadm.go:319] 
	I1217 11:54:16.588154 1949672 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:54:16.588241 1949672 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:54:16.588267 1949672 kubeadm.go:319] 
	I1217 11:54:16.588342 1949672 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 18j5vb.prvjek6drow03x0n \
	I1217 11:54:16.588433 1949672 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:54:16.588459 1949672 kubeadm.go:319] 	--control-plane 
	I1217 11:54:16.588469 1949672 kubeadm.go:319] 
	I1217 11:54:16.588560 1949672 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:54:16.588569 1949672 kubeadm.go:319] 
	I1217 11:54:16.588669 1949672 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 18j5vb.prvjek6drow03x0n \
	I1217 11:54:16.588781 1949672 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:54:16.591660 1949672 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:54:16.591793 1949672 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
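	(The --discovery-token-ca-cert-hash value in the join commands above is derived from the cluster CA certificate: it is the SHA-256 of the certificate's DER-encoded Subject Public Key Info, which lets a joining node pin the CA it fetched via the bootstrap token. A minimal sketch of computing it, assuming the CA path used by the scp lines earlier in this log, /var/lib/minikube/certs/ca.crt.)

	// cacerthash: print the kubeadm-style CA cert hash (sha256 over the
	// DER-encoded Subject Public Key Info of the cluster CA certificate).
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}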
	I1217 11:54:16.591823 1949672 cni.go:84] Creating CNI manager for ""
	I1217 11:54:16.591840 1949672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:16.594322 1949672 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:54:15.986950 1952673 out.go:252]   - Generating certificates and keys ...
	I1217 11:54:15.987059 1952673 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:54:15.989058 1952673 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:54:16.052493 1952673 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:54:16.087998 1952673 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:54:16.320203 1952673 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:54:16.510181 1952673 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:54:16.624810 1952673 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:54:16.625028 1952673 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-601829] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:54:16.722070 1952673 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:54:16.722281 1952673 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-601829] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:54:16.805502 1952673 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:54:16.882349 1952673 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:54:16.913704 1952673 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:54:16.913837 1952673 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:54:17.280943 1952673 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:54:17.360084 1952673 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:54:17.475256 1952673 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:54:17.716171 1952673 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:54:17.898338 1952673 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:54:17.898971 1952673 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:54:17.903068 1952673 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:54:17.904547 1952673 out.go:252]   - Booting up control plane ...
	I1217 11:54:17.904654 1952673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:54:17.904723 1952673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:54:17.905383 1952673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:54:17.920082 1952673 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:54:17.920204 1952673 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:54:17.927602 1952673 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:54:17.927944 1952673 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:54:17.928020 1952673 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:54:18.044594 1952673 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:54:18.044705 1952673 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:54:16.595304 1949672 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:54:16.599717 1949672 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:54:16.599734 1949672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:54:16.614847 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:54:16.838248 1949672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:54:16.838518 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-382022 minikube.k8s.io/updated_at=2025_12_17T11_54_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=default-k8s-diff-port-382022 minikube.k8s.io/primary=true
	I1217 11:54:16.838573 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:16.851345 1949672 ops.go:34] apiserver oom_adj: -16
	I1217 11:54:16.923207 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:17.424221 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:17.923417 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:18.423853 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:18.924195 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:19.424070 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:19.923885 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 11:54:16.038490 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	W1217 11:54:18.038752 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	W1217 11:54:20.040515 1943967 node_ready.go:57] node "embed-certs-542273" has "Ready":"False" status (will retry)
	I1217 11:54:20.424003 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:20.924077 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:21.424694 1949672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:21.506730 1949672 kubeadm.go:1114] duration metric: took 4.668466133s to wait for elevateKubeSystemPrivileges
	I1217 11:54:21.506770 1949672 kubeadm.go:403] duration metric: took 14.955794098s to StartCluster
	I1217 11:54:21.506793 1949672 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:21.506897 1949672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:21.508757 1949672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:21.509017 1949672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:54:21.509046 1949672 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:21.509125 1949672 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:54:21.509272 1949672 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382022"
	I1217 11:54:21.509302 1949672 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:54:21.509302 1949672 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382022"
	I1217 11:54:21.509342 1949672 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382022"
	I1217 11:54:21.509308 1949672 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382022"
	I1217 11:54:21.509396 1949672 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:54:21.509798 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:21.509989 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:21.510630 1949672 out.go:179] * Verifying Kubernetes components...
	I1217 11:54:21.512170 1949672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:21.539888 1949672 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:54:21.541227 1949672 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:21.541249 1949672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:54:21.541318 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:21.541550 1949672 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382022"
	I1217 11:54:21.541595 1949672 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:54:21.542100 1949672 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:54:21.571890 1949672 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:21.571915 1949672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:54:21.571979 1949672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:54:21.580691 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:21.606881 1949672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:54:21.626407 1949672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:54:21.703715 1949672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:21.711434 1949672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:21.731973 1949672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:21.827947 1949672 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 11:54:21.831517 1949672 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:54:22.077861 1949672 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
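	(Editor's note, illustrative only: the long "kubectl ... replace -f -" pipeline above rewrites the coredns ConfigMap to add a hosts block mapping host.minikube.internal to the gateway IP, which the "host record injected" line then confirms. A minimal sketch of checking that entry by hand, with the context name taken from the profile in this log and nothing else from the captured output:
	  # illustrative sketch, not part of the test run
	  kubectl --context default-k8s-diff-port-382022 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	)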
	I1217 11:54:18.546353 1952673 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.881861ms
	I1217 11:54:18.549247 1952673 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:54:18.549365 1952673 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 11:54:18.549491 1952673 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:54:18.549621 1952673 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:54:19.555157 1952673 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005759004s
	I1217 11:54:20.182847 1952673 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.633392056s
	I1217 11:54:22.052028 1952673 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502166221s
	I1217 11:54:22.073347 1952673 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:54:22.086259 1952673 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:54:22.096361 1952673 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:54:22.096703 1952673 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-601829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:54:22.105184 1952673 kubeadm.go:319] [bootstrap-token] Using token: oaw54k.najt1dba7pujt8tu
	I1217 11:54:22.106541 1952673 out.go:252]   - Configuring RBAC rules ...
	I1217 11:54:22.106730 1952673 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:54:22.110907 1952673 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:54:22.116298 1952673 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:54:22.118994 1952673 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:54:22.121698 1952673 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:54:22.124463 1952673 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:54:22.459646 1952673 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:54:22.886676 1952673 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:54:23.459022 1952673 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:54:23.459933 1952673 kubeadm.go:319] 
	I1217 11:54:23.460022 1952673 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:54:23.460033 1952673 kubeadm.go:319] 
	I1217 11:54:23.460142 1952673 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:54:23.460153 1952673 kubeadm.go:319] 
	I1217 11:54:23.460188 1952673 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:54:23.460301 1952673 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:54:23.460391 1952673 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:54:23.460407 1952673 kubeadm.go:319] 
	I1217 11:54:23.460489 1952673 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:54:23.460497 1952673 kubeadm.go:319] 
	I1217 11:54:23.460582 1952673 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:54:23.460593 1952673 kubeadm.go:319] 
	I1217 11:54:23.460666 1952673 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:54:23.460777 1952673 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:54:23.460879 1952673 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:54:23.460891 1952673 kubeadm.go:319] 
	I1217 11:54:23.461018 1952673 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:54:23.461125 1952673 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:54:23.461139 1952673 kubeadm.go:319] 
	I1217 11:54:23.461260 1952673 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token oaw54k.najt1dba7pujt8tu \
	I1217 11:54:23.461395 1952673 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:54:23.461428 1952673 kubeadm.go:319] 	--control-plane 
	I1217 11:54:23.461441 1952673 kubeadm.go:319] 
	I1217 11:54:23.461553 1952673 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:54:23.461561 1952673 kubeadm.go:319] 
	I1217 11:54:23.461658 1952673 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token oaw54k.najt1dba7pujt8tu \
	I1217 11:54:23.461782 1952673 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:54:23.464548 1952673 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:54:23.464732 1952673 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:54:23.464780 1952673 cni.go:84] Creating CNI manager for ""
	I1217 11:54:23.464802 1952673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:23.466819 1952673 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:54:22.042097 1943967 node_ready.go:49] node "embed-certs-542273" is "Ready"
	I1217 11:54:22.042133 1943967 node_ready.go:38] duration metric: took 13.006602171s for node "embed-certs-542273" to be "Ready" ...
	I1217 11:54:22.042150 1943967 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:54:22.042209 1943967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:54:22.058523 1943967 api_server.go:72] duration metric: took 13.442450629s to wait for apiserver process to appear ...
	I1217 11:54:22.058588 1943967 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:54:22.058612 1943967 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:54:22.063740 1943967 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:54:22.064973 1943967 api_server.go:141] control plane version: v1.34.3
	I1217 11:54:22.065009 1943967 api_server.go:131] duration metric: took 6.412111ms to wait for apiserver health ...
	I1217 11:54:22.065020 1943967 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:22.071869 1943967 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:22.072370 1943967 system_pods.go:61] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:22.072391 1943967 system_pods.go:61] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running
	I1217 11:54:22.072401 1943967 system_pods.go:61] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running
	I1217 11:54:22.072407 1943967 system_pods.go:61] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running
	I1217 11:54:22.072418 1943967 system_pods.go:61] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running
	I1217 11:54:22.072424 1943967 system_pods.go:61] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running
	I1217 11:54:22.072430 1943967 system_pods.go:61] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running
	I1217 11:54:22.072442 1943967 system_pods.go:61] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:22.072451 1943967 system_pods.go:74] duration metric: took 7.423552ms to wait for pod list to return data ...
	I1217 11:54:22.072465 1943967 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:22.076068 1943967 default_sa.go:45] found service account: "default"
	I1217 11:54:22.076092 1943967 default_sa.go:55] duration metric: took 3.6199ms for default service account to be created ...
	I1217 11:54:22.076103 1943967 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:54:22.078986 1943967 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:22.079030 1943967 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:22.079038 1943967 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running
	I1217 11:54:22.079046 1943967 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running
	I1217 11:54:22.079051 1943967 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running
	I1217 11:54:22.079057 1943967 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running
	I1217 11:54:22.079062 1943967 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running
	I1217 11:54:22.079074 1943967 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running
	I1217 11:54:22.079081 1943967 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:22.079122 1943967 retry.go:31] will retry after 311.169713ms: missing components: kube-dns
	I1217 11:54:22.395026 1943967 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:22.395065 1943967 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:22.395072 1943967 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running
	I1217 11:54:22.395082 1943967 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running
	I1217 11:54:22.395088 1943967 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running
	I1217 11:54:22.395095 1943967 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running
	I1217 11:54:22.395100 1943967 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running
	I1217 11:54:22.395105 1943967 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running
	I1217 11:54:22.395112 1943967 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:22.395136 1943967 retry.go:31] will retry after 316.044348ms: missing components: kube-dns
	I1217 11:54:22.716898 1943967 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:22.716937 1943967 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:22.716947 1943967 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running
	I1217 11:54:22.717029 1943967 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running
	I1217 11:54:22.717058 1943967 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running
	I1217 11:54:22.717065 1943967 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running
	I1217 11:54:22.717071 1943967 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running
	I1217 11:54:22.717076 1943967 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running
	I1217 11:54:22.717085 1943967 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:22.717134 1943967 retry.go:31] will retry after 339.121376ms: missing components: kube-dns
	I1217 11:54:23.061898 1943967 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:23.061933 1943967 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running
	I1217 11:54:23.061942 1943967 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running
	I1217 11:54:23.061947 1943967 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running
	I1217 11:54:23.061952 1943967 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running
	I1217 11:54:23.061959 1943967 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running
	I1217 11:54:23.061964 1943967 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running
	I1217 11:54:23.061969 1943967 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running
	I1217 11:54:23.061974 1943967 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running
	I1217 11:54:23.061986 1943967 system_pods.go:126] duration metric: took 985.875337ms to wait for k8s-apps to be running ...
	I1217 11:54:23.061999 1943967 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:54:23.062060 1943967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:23.080099 1943967 system_svc.go:56] duration metric: took 18.092358ms WaitForService to wait for kubelet
	I1217 11:54:23.080130 1943967 kubeadm.go:587] duration metric: took 14.464065342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:54:23.080155 1943967 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:23.083741 1943967 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:23.083773 1943967 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:23.083792 1943967 node_conditions.go:105] duration metric: took 3.63142ms to run NodePressure ...
	I1217 11:54:23.083810 1943967 start.go:242] waiting for startup goroutines ...
	I1217 11:54:23.083826 1943967 start.go:247] waiting for cluster config update ...
	I1217 11:54:23.083840 1943967 start.go:256] writing updated cluster config ...
	I1217 11:54:23.084124 1943967 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:23.089317 1943967 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:54:23.093755 1943967 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.098783 1943967 pod_ready.go:94] pod "coredns-66bc5c9577-t66bd" is "Ready"
	I1217 11:54:23.098808 1943967 pod_ready.go:86] duration metric: took 5.026696ms for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.101270 1943967 pod_ready.go:83] waiting for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.107024 1943967 pod_ready.go:94] pod "etcd-embed-certs-542273" is "Ready"
	I1217 11:54:23.107055 1943967 pod_ready.go:86] duration metric: took 5.757627ms for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.109591 1943967 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.114001 1943967 pod_ready.go:94] pod "kube-apiserver-embed-certs-542273" is "Ready"
	I1217 11:54:23.114027 1943967 pod_ready.go:86] duration metric: took 4.355034ms for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.116046 1943967 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.494872 1943967 pod_ready.go:94] pod "kube-controller-manager-embed-certs-542273" is "Ready"
	I1217 11:54:23.494904 1943967 pod_ready.go:86] duration metric: took 378.835183ms for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:23.697478 1943967 pod_ready.go:83] waiting for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:24.093987 1943967 pod_ready.go:94] pod "kube-proxy-gfbw9" is "Ready"
	I1217 11:54:24.094016 1943967 pod_ready.go:86] duration metric: took 396.505748ms for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:24.294418 1943967 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:24.693839 1943967 pod_ready.go:94] pod "kube-scheduler-embed-certs-542273" is "Ready"
	I1217 11:54:24.693866 1943967 pod_ready.go:86] duration metric: took 399.418875ms for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:54:24.693881 1943967 pod_ready.go:40] duration metric: took 1.604533507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:54:24.748442 1943967 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:54:24.750758 1943967 out.go:179] * Done! kubectl is now configured to use "embed-certs-542273" cluster and "default" namespace by default
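	(Editor's note, illustrative only: the "Done!" line above closes the embed-certs-542273 start log; the interleaved lines that follow belong to the other profiles still starting. The pod_ready waits logged above correspond to checks that could be reproduced roughly as follows, with the context name and label selectors copied from this log and the exact commands assumed rather than captured:
	  # illustrative sketch, not part of the test run
	  kubectl --context embed-certs-542273 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context embed-certs-542273 -n kube-system get pods -l component=kube-scheduler
	)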
	I1217 11:54:22.078993 1949672 addons.go:530] duration metric: took 569.861432ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:54:22.333466 1949672 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-382022" context rescaled to 1 replicas
	W1217 11:54:23.841148 1949672 node_ready.go:57] node "default-k8s-diff-port-382022" has "Ready":"False" status (will retry)
	I1217 11:54:23.468045 1952673 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:54:23.473157 1952673 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 11:54:23.473177 1952673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:54:23.489874 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:54:23.796888 1952673 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:54:23.796985 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:23.797115 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-601829 minikube.k8s.io/updated_at=2025_12_17T11_54_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=newest-cni-601829 minikube.k8s.io/primary=true
	I1217 11:54:23.917973 1952673 ops.go:34] apiserver oom_adj: -16
	I1217 11:54:23.918016 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:24.418942 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:24.919418 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:25.418337 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:25.918701 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:26.418197 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:26.918936 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:27.418152 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:27.918518 1952673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:54:28.000100 1952673 kubeadm.go:1114] duration metric: took 4.203162629s to wait for elevateKubeSystemPrivileges
	I1217 11:54:28.000144 1952673 kubeadm.go:403] duration metric: took 12.366278135s to StartCluster
	I1217 11:54:28.000168 1952673 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:28.000254 1952673 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:28.002958 1952673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:54:28.003227 1952673 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:54:28.003244 1952673 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:54:28.003335 1952673 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:54:28.003426 1952673 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-601829"
	I1217 11:54:28.003446 1952673 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-601829"
	I1217 11:54:28.003468 1952673 addons.go:70] Setting default-storageclass=true in profile "newest-cni-601829"
	I1217 11:54:28.003484 1952673 host.go:66] Checking if "newest-cni-601829" exists ...
	I1217 11:54:28.003496 1952673 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-601829"
	I1217 11:54:28.003516 1952673 config.go:182] Loaded profile config "newest-cni-601829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:28.003869 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:28.004082 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:28.008658 1952673 out.go:179] * Verifying Kubernetes components...
	I1217 11:54:28.010043 1952673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:54:28.028837 1952673 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:54:28.030468 1952673 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:28.030493 1952673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:54:28.030568 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:28.031207 1952673 addons.go:239] Setting addon default-storageclass=true in "newest-cni-601829"
	I1217 11:54:28.031246 1952673 host.go:66] Checking if "newest-cni-601829" exists ...
	I1217 11:54:28.031598 1952673 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:28.062197 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:28.065117 1952673 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:28.065143 1952673 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:54:28.065210 1952673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:28.089565 1952673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:28.102827 1952673 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:54:28.155109 1952673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:54:28.180269 1952673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:54:28.198602 1952673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:54:28.293365 1952673 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 11:54:28.295081 1952673 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:54:28.295131 1952673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:54:28.510064 1952673 api_server.go:72] duration metric: took 506.780418ms to wait for apiserver process to appear ...
	I1217 11:54:28.510095 1952673 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:54:28.510114 1952673 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:28.515766 1952673 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:54:28.516957 1952673 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 11:54:28.517032 1952673 api_server.go:131] duration metric: took 6.929175ms to wait for apiserver health ...
	I1217 11:54:28.517055 1952673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:28.519284 1952673 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 11:54:28.520333 1952673 addons.go:530] duration metric: took 516.993745ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:54:28.521561 1952673 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:28.521592 1952673 system_pods.go:61] "coredns-7d764666f9-jwmxw" [1daf4bf2-080a-49a2-ad9f-fea9cdbc571b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:28.521601 1952673 system_pods.go:61] "etcd-newest-cni-601829" [d71be3a5-4bd0-47e7-98ea-b50d6c2abd0a] Running
	I1217 11:54:28.521611 1952673 system_pods.go:61] "kindnet-t6q5x" [6c3deb88-31c5-4008-aae7-7467aa3f9e81] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:54:28.521625 1952673 system_pods.go:61] "kube-apiserver-newest-cni-601829" [eb175f99-213c-4663-bbf7-43c54202dbba] Running
	I1217 11:54:28.521631 1952673 system_pods.go:61] "kube-controller-manager-newest-cni-601829" [f9d7a310-c545-49de-9def-714ba54d3bbb] Running
	I1217 11:54:28.521639 1952673 system_pods.go:61] "kube-proxy-grz2c" [35f43b51-b45f-4c1c-a95f-3a34192b4334] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:54:28.521647 1952673 system_pods.go:61] "kube-scheduler-newest-cni-601829" [79ecb056-ebc4-4c51-85a4-727a2d633751] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:54:28.521654 1952673 system_pods.go:61] "storage-provisioner" [3e2c9b6f-d0cc-48bc-ba8d-6da58cb1968d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:28.521663 1952673 system_pods.go:74] duration metric: took 4.590665ms to wait for pod list to return data ...
	I1217 11:54:28.521672 1952673 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:28.525338 1952673 default_sa.go:45] found service account: "default"
	I1217 11:54:28.525358 1952673 default_sa.go:55] duration metric: took 3.679177ms for default service account to be created ...
	I1217 11:54:28.525373 1952673 kubeadm.go:587] duration metric: took 522.093751ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:28.525391 1952673 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:28.596300 1952673 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:28.596327 1952673 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:28.596340 1952673 node_conditions.go:105] duration metric: took 70.944328ms to run NodePressure ...
	I1217 11:54:28.596355 1952673 start.go:242] waiting for startup goroutines ...
	I1217 11:54:28.799195 1952673 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-601829" context rescaled to 1 replicas
	I1217 11:54:28.799236 1952673 start.go:247] waiting for cluster config update ...
	I1217 11:54:28.799253 1952673 start.go:256] writing updated cluster config ...
	I1217 11:54:28.799577 1952673 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:28.857192 1952673 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:54:28.859290 1952673 out.go:179] * Done! kubectl is now configured to use "newest-cni-601829" cluster and "default" namespace by default
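	(Editor's note, illustrative only: this second "Done!" line ends the captured minikube start output; everything below is diagnostic state gathered afterwards from the newest-cni-601829 node. The same state could be inspected from the test host with standard kubectl commands; the context name comes from the log above, the rest is an assumed sketch:
	  # illustrative sketch, not part of the test run
	  kubectl --context newest-cni-601829 get nodes -o wide
	  kubectl --context newest-cni-601829 -n kube-system get pods -o wide
	)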
	
	
	==> CRI-O <==
	Dec 17 11:54:18 newest-cni-601829 crio[813]: time="2025-12-17T11:54:18.921621962Z" level=info msg="Started container" PID=1252 containerID=a93274d759e1f11dd8d3a7a190b7c37c991dbc89502f410e62b3f637e098118d description=kube-system/kube-controller-manager-newest-cni-601829/kube-controller-manager id=102a98ef-4429-4a86-8713-07d16d00913b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a193dd2beb70b998bec2d2d413302f5d8944185dbd5aaca6ea5a037e61c63d1c
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.288008151Z" level=info msg="Running pod sandbox: kube-system/kindnet-t6q5x/POD" id=a87e0422-ea69-4b74-aa7d-f977ac842642 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.288088354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.289707798Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-grz2c/POD" id=e9715e89-5693-466f-8e0c-3595a1567b9b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.289787516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.291258056Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a87e0422-ea69-4b74-aa7d-f977ac842642 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.294904516Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.297210042Z" level=info msg="Ran pod sandbox 3d95217b2caba63c78f0d0ab12f2ffb30982a1b1413c14aa642ec5587ba87807 with infra container: kube-system/kindnet-t6q5x/POD" id=a87e0422-ea69-4b74-aa7d-f977ac842642 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.297227313Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e9715e89-5693-466f-8e0c-3595a1567b9b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.299125834Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=4b8a4532-4fd9-4519-82f5-c169d6d02acd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.299261287Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=4b8a4532-4fd9-4519-82f5-c169d6d02acd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.299309442Z" level=info msg="Neither image nor artifact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=4b8a4532-4fd9-4519-82f5-c169d6d02acd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.300160116Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.30087036Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=347c9716-175e-4e0e-b121-3a0dec2f6e67 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.301135195Z" level=info msg="Ran pod sandbox 7374bdd58a2c868c65e29e9091e6675ecfb10b72d2b72524987e05fa4cc13d8d with infra container: kube-system/kube-proxy-grz2c/POD" id=e9715e89-5693-466f-8e0c-3595a1567b9b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.303404787Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ab7f598c-2c90-42f4-87b2-8e03aa1ecf72 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.303581077Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.304623166Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b2fd2d65-e6bf-4343-8e15-017053a54794 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.309152428Z" level=info msg="Creating container: kube-system/kube-proxy-grz2c/kube-proxy" id=19afde2b-568c-4ecb-90f1-e67b0896db56 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.309280351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.315218262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.315954574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.357624011Z" level=info msg="Created container 40e5ac03e745de7fbb90e003b9164ace8ff0eda7c0b581eee884103877287756: kube-system/kube-proxy-grz2c/kube-proxy" id=19afde2b-568c-4ecb-90f1-e67b0896db56 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.358471654Z" level=info msg="Starting container: 40e5ac03e745de7fbb90e003b9164ace8ff0eda7c0b581eee884103877287756" id=f80da71b-7936-4b9c-8760-e98674bf190a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:28 newest-cni-601829 crio[813]: time="2025-12-17T11:54:28.361943897Z" level=info msg="Started container" PID=1611 containerID=40e5ac03e745de7fbb90e003b9164ace8ff0eda7c0b581eee884103877287756 description=kube-system/kube-proxy-grz2c/kube-proxy id=f80da71b-7936-4b9c-8760-e98674bf190a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7374bdd58a2c868c65e29e9091e6675ecfb10b72d2b72524987e05fa4cc13d8d
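	(Editor's note, illustrative only: the CRI-O journal excerpt above ends with kube-proxy starting, and the container listing below is the node-side view of the same state. A hedged sketch of how such a snapshot is typically collected, with the profile name taken from this log and the exact flags assumed rather than captured:
	  # illustrative sketch, not part of the test run
	  minikube ssh -p newest-cni-601829 -- sudo journalctl -u crio --no-pager -n 50
	  minikube ssh -p newest-cni-601829 -- sudo crictl ps -a
	)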
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	40e5ac03e745d       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   1 second ago        Running             kube-proxy                0                   7374bdd58a2c8       kube-proxy-grz2c                            kube-system
	6191e94d7bc78       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   11 seconds ago      Running             kube-scheduler            0                   19739665ff56f       kube-scheduler-newest-cni-601829            kube-system
	a93274d759e1f       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   11 seconds ago      Running             kube-controller-manager   0                   a193dd2beb70b       kube-controller-manager-newest-cni-601829   kube-system
	1708d29374091       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   11 seconds ago      Running             kube-apiserver            0                   33da3ae245c90       kube-apiserver-newest-cni-601829            kube-system
	9d6ea3ab7e629       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      0                   f840ef48200b7       etcd-newest-cni-601829                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-601829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-601829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=newest-cni-601829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-601829
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:54:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:54:22 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:54:22 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:54:22 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 11:54:22 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-601829
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                bf12f87d-6e9a-4666-9ac7-1005cb2f7e7a
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-601829                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-t6q5x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-601829             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-601829    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-grz2c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-601829             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-601829 event: Registered Node newest-cni-601829 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [9d6ea3ab7e629665455f9a370f0eb3e09f569281c124523b880e377da2918b3d] <==
	{"level":"info","ts":"2025-12-17T11:54:18.965753Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T11:54:19.054432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T11:54:19.054508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T11:54:19.054577Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-17T11:54:19.054591Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:19.054615Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:19.055136Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:19.055173Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:19.055200Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:19.055214Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:19.055903Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-601829 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:54:19.055912Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:19.055944Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:19.055928Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:19.056338Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:19.056398Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:19.056706Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:19.057089Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:19.057112Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:19.057030Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:19.057521Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:19.057592Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-17T11:54:19.057752Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-17T11:54:19.060613Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-17T11:54:19.060616Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:54:30 up  5:36,  0 user,  load average: 5.80, 3.52, 2.26
	Linux newest-cni-601829 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [1708d2937409165d0e424c4f9262dc94ab2c4c14fa317e374ed8f81f206e069c] <==
	I1217 11:54:20.227079       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:54:20.227085       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:54:20.227901       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:20.233377       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:20.233417       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 11:54:20.240165       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:20.260913       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 11:54:20.426378       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:21.128692       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 11:54:21.133102       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:54:21.133121       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 11:54:21.727001       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:21.778426       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:21.931302       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:54:21.938600       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 11:54:21.939611       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:21.944399       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:22.159239       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:22.872923       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:22.885284       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:54:22.894021       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 11:54:27.812944       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:27.816649       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:27.910824       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:54:27.960203       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a93274d759e1f11dd8d3a7a190b7c37c991dbc89502f410e62b3f637e098118d] <==
	I1217 11:54:26.966227       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.967846       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.967955       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968037       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968055       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968071       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968074       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968084       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968093       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968103       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968116       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968132       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.968138       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.969207       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.969168       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 11:54:26.969383       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.969402       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-601829"
	I1217 11:54:26.969447       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 11:54:26.970582       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:26.970661       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:26.979172       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-601829" podCIDRs=["10.42.0.0/24"]
	I1217 11:54:27.065793       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:27.065818       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 11:54:27.065823       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 11:54:27.071317       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [40e5ac03e745de7fbb90e003b9164ace8ff0eda7c0b581eee884103877287756] <==
	I1217 11:54:28.401398       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:28.487006       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:28.587750       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:28.587795       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 11:54:28.587921       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:28.607634       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:28.607689       1 server_linux.go:136] "Using iptables Proxier"
	I1217 11:54:28.612810       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:28.613118       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 11:54:28.613132       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:28.614162       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:28.614190       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:28.614246       1 config.go:200] "Starting service config controller"
	I1217 11:54:28.614257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:28.614259       1 config.go:309] "Starting node config controller"
	I1217 11:54:28.614274       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:28.614311       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:28.614368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:28.714836       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:54:28.714843       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:54:28.714863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:54:28.714886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [6191e94d7bc785cf235224d1741d3c514b49dd988a75c6ff2278f6ccf765bd14] <==
	E1217 11:54:20.183288       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 11:54:20.183482       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 11:54:20.183832       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 11:54:20.183861       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 11:54:20.183941       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 11:54:20.183946       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 11:54:20.183991       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 11:54:20.184005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 11:54:20.184037       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1217 11:54:20.184124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 11:54:20.996590       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 11:54:21.040095       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 11:54:21.049167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 11:54:21.160589       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 11:54:21.198107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 11:54:21.253528       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 11:54:21.276248       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 11:54:21.290566       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 11:54:21.359975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 11:54:21.367950       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 11:54:21.410650       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 11:54:21.435086       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 11:54:21.488969       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1217 11:54:21.739758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1217 11:54:24.378038       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: E1217 11:54:23.777125    1334 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-601829\" already exists" pod="kube-system/etcd-newest-cni-601829"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: E1217 11:54:23.777228    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-601829" containerName="etcd"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: E1217 11:54:23.780439    1334 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-601829\" already exists" pod="kube-system/kube-apiserver-newest-cni-601829"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: E1217 11:54:23.780546    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: I1217 11:54:23.834765    1334 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-601829" podStartSLOduration=1.834744498 podStartE2EDuration="1.834744498s" podCreationTimestamp="2025-12-17 11:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:23.832428877 +0000 UTC m=+1.194249859" watchObservedRunningTime="2025-12-17 11:54:23.834744498 +0000 UTC m=+1.196565484"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: I1217 11:54:23.864153    1334 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-601829" podStartSLOduration=1.864101048 podStartE2EDuration="1.864101048s" podCreationTimestamp="2025-12-17 11:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:23.84866365 +0000 UTC m=+1.210484634" watchObservedRunningTime="2025-12-17 11:54:23.864101048 +0000 UTC m=+1.225922031"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: I1217 11:54:23.879173    1334 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-601829" podStartSLOduration=1.879156249 podStartE2EDuration="1.879156249s" podCreationTimestamp="2025-12-17 11:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:23.878796432 +0000 UTC m=+1.240617416" watchObservedRunningTime="2025-12-17 11:54:23.879156249 +0000 UTC m=+1.240977230"
	Dec 17 11:54:23 newest-cni-601829 kubelet[1334]: I1217 11:54:23.879274    1334 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-601829" podStartSLOduration=1.879268261 podStartE2EDuration="1.879268261s" podCreationTimestamp="2025-12-17 11:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:23.865512881 +0000 UTC m=+1.227333863" watchObservedRunningTime="2025-12-17 11:54:23.879268261 +0000 UTC m=+1.241089244"
	Dec 17 11:54:24 newest-cni-601829 kubelet[1334]: E1217 11:54:24.770627    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-601829" containerName="etcd"
	Dec 17 11:54:24 newest-cni-601829 kubelet[1334]: E1217 11:54:24.770947    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:24 newest-cni-601829 kubelet[1334]: E1217 11:54:24.771167    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-601829" containerName="kube-scheduler"
	Dec 17 11:54:25 newest-cni-601829 kubelet[1334]: E1217 11:54:25.772315    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-601829" containerName="kube-scheduler"
	Dec 17 11:54:25 newest-cni-601829 kubelet[1334]: E1217 11:54:25.772482    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:26 newest-cni-601829 kubelet[1334]: I1217 11:54:26.979065    1334 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 17 11:54:26 newest-cni-601829 kubelet[1334]: I1217 11:54:26.980443    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.065872    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f43b51-b45f-4c1c-a95f-3a34192b4334-lib-modules\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.066659    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-xtables-lock\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.066863    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35f43b51-b45f-4c1c-a95f-3a34192b4334-kube-proxy\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.066921    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-cni-cfg\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.066956    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdkh\" (UniqueName: \"kubernetes.io/projected/35f43b51-b45f-4c1c-a95f-3a34192b4334-kube-api-access-vkdkh\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.066995    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-lib-modules\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.067021    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hl9\" (UniqueName: \"kubernetes.io/projected/6c3deb88-31c5-4008-aae7-7467aa3f9e81-kube-api-access-g8hl9\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.067045    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f43b51-b45f-4c1c-a95f-3a34192b4334-xtables-lock\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: I1217 11:54:28.792315    1334 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-grz2c" podStartSLOduration=1.79229408 podStartE2EDuration="1.79229408s" podCreationTimestamp="2025-12-17 11:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:28.792030056 +0000 UTC m=+6.153851038" watchObservedRunningTime="2025-12-17 11:54:28.79229408 +0000 UTC m=+6.154115078"
	Dec 17 11:54:28 newest-cni-601829 kubelet[1334]: E1217 11:54:28.985491    1334 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-601829" containerName="kube-controller-manager"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-601829 -n newest-cni-601829
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-601829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jwmxw storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner: exit status 1 (65.183051ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jwmxw" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.31s)
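As a side note, the post-mortem step above finds the still-pending pods by querying with a field selector on pod phase. A minimal standalone sketch of the same query, assuming kubectl is on PATH and the "newest-cni-601829" context still exists, could look like this (illustration only, not part of the test harness):

// nonrunning.go - illustration only; mirrors the post-mortem query recorded above.
// Assumes kubectl is on PATH and the minikube context "newest-cni-601829" exists.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same arguments helpers_test.go used to list pods whose phase is not Running.
	out, err := exec.Command("kubectl",
		"--context", "newest-cni-601829",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("non-running pods: %s\n", out)
}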

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.805303ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
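The exit status 11 above is minikube refusing to enable the addon because its paused check could not complete: per the stderr, the check shells out to `sudo runc list -f json`, and that command fails on this host because /run/runc does not exist. A rough, hypothetical sketch of that style of check (not minikube's actual implementation) is:

// pausedcheck.go - hypothetical sketch, not minikube's implementation.
// It runs the same command the stderr above shows failing and reports
// whether any container is currently paused.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer covers only the fields of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this host /run/runc is missing, so runc exits non-zero; minikube
		// surfaces that as MK_ADDON_ENABLE_PAUSED in the output above.
		fmt.Println("runc list failed:", err)
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("unexpected runc output:", err)
		return
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}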
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-542273 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-542273 describe deploy/metrics-server -n kube-system: exit status 1 (62.208371ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-542273 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
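The expectation quoted above comes from the --images/--registries overrides passed to `addons enable`: had the metrics-server Deployment been created, its container image should reference fake.domain/registry.k8s.io/echoserver:1.4. A small standalone check along those lines, assuming kubectl is on PATH (illustrative, not the harness code), might be:

// imagecheck.go - illustrative only; not part of the test harness.
// Verifies that the metrics-server Deployment (if present) uses the
// overridden registry/image the test expects.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "embed-certs-542273",
		"-n", "kube-system",
		"get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}",
	).Output()
	if err != nil {
		// In this run the Deployment was never created, so kubectl returns
		// NotFound (exit status 1), matching the stderr recorded above.
		fmt.Println("kubectl failed:", err)
		return
	}
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	fmt.Println("image matches expectation:", strings.Contains(string(out), want))
}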
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-542273
helpers_test.go:244: (dbg) docker inspect embed-certs-542273:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c",
	        "Created": "2025-12-17T11:53:42.422221245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1945241,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:53:42.992278985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c-json.log",
	        "Name": "/embed-certs-542273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-542273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-542273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c",
	                "LowerDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-542273",
	                "Source": "/var/lib/docker/volumes/embed-certs-542273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-542273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-542273",
	                "name.minikube.sigs.k8s.io": "embed-certs-542273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f34e9e6963f39b48b8922f06c07c7219f4073276fcb475a339e419ae6afb1631",
	            "SandboxKey": "/var/run/docker/netns/f34e9e6963f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34606"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34607"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34610"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34608"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34609"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-542273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d402fb644edc9023d8248c192d3a2f7035874f1b3b272648cd1fc766ab85445",
	                    "EndpointID": "2cb3a1dd5b9de55b04434bfd642b6b737d2b4db0fa46f66a3448e5c41a033398",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:53:66:6c:8d:ec",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-542273",
	                        "b1f11181a02b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-542273 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p old-k8s-version-401285 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-401285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:52 UTC │
	│ start   │ -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:52 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                          │ cert-expiration-067996       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p cert-expiration-067996                                                                                                                                                                                                                          │ cert-expiration-067996       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:54:34
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:54:34.109012 1960071 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:34.109276 1960071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:34.109287 1960071 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:34.109291 1960071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:34.109493 1960071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:34.109962 1960071 out.go:368] Setting JSON to false
	I1217 11:54:34.111221 1960071 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20219,"bootTime":1765952255,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:54:34.111278 1960071 start.go:143] virtualization: kvm guest
	I1217 11:54:34.113528 1960071 out.go:179] * [newest-cni-601829] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:54:34.114792 1960071 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:54:34.114807 1960071 notify.go:221] Checking for updates...
	I1217 11:54:34.118066 1960071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:54:34.119233 1960071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:34.120377 1960071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:54:34.121555 1960071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:54:34.122732 1960071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:54:34.124396 1960071 config.go:182] Loaded profile config "newest-cni-601829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:34.125307 1960071 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:54:34.150097 1960071 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:54:34.150223 1960071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:34.208153 1960071 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:54:34.197971123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:34.208312 1960071 docker.go:319] overlay module found
	I1217 11:54:34.210159 1960071 out.go:179] * Using the docker driver based on existing profile
	I1217 11:54:34.211557 1960071 start.go:309] selected driver: docker
	I1217 11:54:34.211578 1960071 start.go:927] validating driver "docker" against &{Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:34.211689 1960071 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:54:34.212324 1960071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:34.267069 1960071 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:54:34.257510849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:34.267372 1960071 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:34.267411 1960071 cni.go:84] Creating CNI manager for ""
	I1217 11:54:34.267483 1960071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:34.267528 1960071 start.go:353] cluster config:
	{Name:newest-cni-601829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-601829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:34.270024 1960071 out.go:179] * Starting "newest-cni-601829" primary control-plane node in "newest-cni-601829" cluster
	I1217 11:54:34.271200 1960071 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:54:34.272366 1960071 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:54:34.273381 1960071 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:34.273421 1960071 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:54:34.273430 1960071 cache.go:65] Caching tarball of preloaded images
	I1217 11:54:34.273482 1960071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:54:34.273524 1960071 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:54:34.273545 1960071 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 11:54:34.273674 1960071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/newest-cni-601829/config.json ...
	I1217 11:54:34.294197 1960071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:54:34.294220 1960071 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:54:34.294238 1960071 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:54:34.294274 1960071 start.go:360] acquireMachinesLock for newest-cni-601829: {Name:mk9faceab19a04d2aa54df7eaada9c8c27536be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:34.294346 1960071 start.go:364] duration metric: took 50.237µs to acquireMachinesLock for "newest-cni-601829"
	I1217 11:54:34.294369 1960071 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:54:34.294374 1960071 fix.go:54] fixHost starting: 
	I1217 11:54:34.294662 1960071 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:34.312147 1960071 fix.go:112] recreateIfNeeded on newest-cni-601829: state=Stopped err=<nil>
	W1217 11:54:34.312191 1960071 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 11:54:30.834493 1949672 node_ready.go:57] node "default-k8s-diff-port-382022" has "Ready":"False" status (will retry)
	W1217 11:54:32.835255 1949672 node_ready.go:57] node "default-k8s-diff-port-382022" has "Ready":"False" status (will retry)
	I1217 11:54:34.836911 1949672 node_ready.go:49] node "default-k8s-diff-port-382022" is "Ready"
	I1217 11:54:34.836948 1949672 node_ready.go:38] duration metric: took 13.005333229s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:54:34.836968 1949672 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:54:34.837023 1949672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:54:34.853509 1949672 api_server.go:72] duration metric: took 13.344424279s to wait for apiserver process to appear ...
	I1217 11:54:34.853561 1949672 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:54:34.853585 1949672 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:54:34.859915 1949672 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 11:54:34.861053 1949672 api_server.go:141] control plane version: v1.34.3
	I1217 11:54:34.861088 1949672 api_server.go:131] duration metric: took 7.518174ms to wait for apiserver health ...
	I1217 11:54:34.861098 1949672 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:34.865585 1949672 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:34.865626 1949672 system_pods.go:61] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:34.865634 1949672 system_pods.go:61] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running
	I1217 11:54:34.865642 1949672 system_pods.go:61] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:54:34.865658 1949672 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running
	I1217 11:54:34.865664 1949672 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running
	I1217 11:54:34.865671 1949672 system_pods.go:61] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running
	I1217 11:54:34.865677 1949672 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running
	I1217 11:54:34.865684 1949672 system_pods.go:61] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:34.865696 1949672 system_pods.go:74] duration metric: took 4.587743ms to wait for pod list to return data ...
	I1217 11:54:34.865710 1949672 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:34.867922 1949672 default_sa.go:45] found service account: "default"
	I1217 11:54:34.867949 1949672 default_sa.go:55] duration metric: took 2.231908ms for default service account to be created ...
	I1217 11:54:34.867960 1949672 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:54:34.871018 1949672 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:34.871049 1949672 system_pods.go:89] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:34.871058 1949672 system_pods.go:89] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running
	I1217 11:54:34.871065 1949672 system_pods.go:89] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:54:34.871071 1949672 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running
	I1217 11:54:34.871077 1949672 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running
	I1217 11:54:34.871120 1949672 system_pods.go:89] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running
	I1217 11:54:34.871126 1949672 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running
	I1217 11:54:34.871135 1949672 system_pods.go:89] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:34.871192 1949672 retry.go:31] will retry after 203.17763ms: missing components: kube-dns
	I1217 11:54:35.078442 1949672 system_pods.go:86] 8 kube-system pods found
	I1217 11:54:35.078480 1949672 system_pods.go:89] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:54:35.078487 1949672 system_pods.go:89] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running
	I1217 11:54:35.078493 1949672 system_pods.go:89] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:54:35.078497 1949672 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running
	I1217 11:54:35.078503 1949672 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running
	I1217 11:54:35.078508 1949672 system_pods.go:89] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running
	I1217 11:54:35.078513 1949672 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running
	I1217 11:54:35.078519 1949672 system_pods.go:89] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:54:35.078550 1949672 retry.go:31] will retry after 305.964933ms: missing components: kube-dns
	
	
	==> CRI-O <==
	Dec 17 11:54:22 embed-certs-542273 crio[812]: time="2025-12-17T11:54:22.40356457Z" level=info msg="Starting container: c406c13abc9b1f71e205ddbf87d8f07192a167415d7d74a91b69c9b32309a002" id=4402223d-211b-4c69-af1a-d84ee946c1e9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:22 embed-certs-542273 crio[812]: time="2025-12-17T11:54:22.406418537Z" level=info msg="Started container" PID=1932 containerID=c406c13abc9b1f71e205ddbf87d8f07192a167415d7d74a91b69c9b32309a002 description=kube-system/coredns-66bc5c9577-t66bd/coredns id=4402223d-211b-4c69-af1a-d84ee946c1e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74106f9f712d8534d5141448b162bb6eb5d60fbade150ebfc06a351e4e58b975
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.231060099Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7adda10e-9e9c-4c9b-ab1f-dafd8777adbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.231157212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.236056916Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2ff7bef95444c4ce25853428db89293ffa95d08313377c9c602fb76817e374c1 UID:93dc2ecf-3c1a-4f60-bd0e-6f961d537d2c NetNS:/var/run/netns/8aca7ab3-58a4-4f80-93a5-dc6f53467408 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000912670}] Aliases:map[]}"
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.236087502Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.245770659Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2ff7bef95444c4ce25853428db89293ffa95d08313377c9c602fb76817e374c1 UID:93dc2ecf-3c1a-4f60-bd0e-6f961d537d2c NetNS:/var/run/netns/8aca7ab3-58a4-4f80-93a5-dc6f53467408 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000912670}] Aliases:map[]}"
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.245911194Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.246713309Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.247494765Z" level=info msg="Ran pod sandbox 2ff7bef95444c4ce25853428db89293ffa95d08313377c9c602fb76817e374c1 with infra container: default/busybox/POD" id=7adda10e-9e9c-4c9b-ab1f-dafd8777adbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.248850939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bf15a0de-b8ac-4e8b-81ec-c054ef31aeee name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.248996613Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bf15a0de-b8ac-4e8b-81ec-c054ef31aeee name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.249031848Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bf15a0de-b8ac-4e8b-81ec-c054ef31aeee name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.24968539Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c99f5bb3-0da9-4daa-b38f-66bd1834c9a8 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:25 embed-certs-542273 crio[812]: time="2025-12-17T11:54:25.251261026Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.158305006Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c99f5bb3-0da9-4daa-b38f-66bd1834c9a8 name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.159022402Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc6713e4-fade-4a59-b4d1-ea8940b2e352 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.160514253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=390dc969-4066-4e35-9457-eb95544b7f1e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.164180307Z" level=info msg="Creating container: default/busybox/busybox" id=13d6efc1-6c93-45aa-8535-640823cd59b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.16430146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.168508024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.168980461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.209946678Z" level=info msg="Created container a1d1f099506306a16307a1dfb5a4fc3ed6bc07229ce71528cdead19082b54832: default/busybox/busybox" id=13d6efc1-6c93-45aa-8535-640823cd59b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.210769752Z" level=info msg="Starting container: a1d1f099506306a16307a1dfb5a4fc3ed6bc07229ce71528cdead19082b54832" id=4aa2b680-fc13-4354-8bd0-4290dcc837d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:27 embed-certs-542273 crio[812]: time="2025-12-17T11:54:27.212617214Z" level=info msg="Started container" PID=2010 containerID=a1d1f099506306a16307a1dfb5a4fc3ed6bc07229ce71528cdead19082b54832 description=default/busybox/busybox id=4aa2b680-fc13-4354-8bd0-4290dcc837d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ff7bef95444c4ce25853428db89293ffa95d08313377c9c602fb76817e374c1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a1d1f09950630       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   2ff7bef95444c       busybox                                      default
	c406c13abc9b1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   74106f9f712d8       coredns-66bc5c9577-t66bd                     kube-system
	9d650cf2dca8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   e618ef83e5712       storage-provisioner                          kube-system
	3e184dc423a02       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    25 seconds ago      Running             kindnet-cni               0                   8650638ace7a9       kindnet-lvlhs                                kube-system
	5e2201c9c8e62       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      27 seconds ago      Running             kube-proxy                0                   d93531dbea4e9       kube-proxy-gfbw9                             kube-system
	5a49df69c9d20       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      38 seconds ago      Running             kube-apiserver            0                   f688d78f5715f       kube-apiserver-embed-certs-542273            kube-system
	ba70cf78eefd6       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      38 seconds ago      Running             kube-scheduler            0                   9690a974e2fb7       kube-scheduler-embed-certs-542273            kube-system
	8b85a8944f930       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      38 seconds ago      Running             etcd                      0                   f1e867de4153d       etcd-embed-certs-542273                      kube-system
	918f98ac571c3       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      38 seconds ago      Running             kube-controller-manager   0                   f0912a58f5627       kube-controller-manager-embed-certs-542273   kube-system
	
	
	==> coredns [c406c13abc9b1f71e205ddbf87d8f07192a167415d7d74a91b69c9b32309a002] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54092 - 22781 "HINFO IN 5030278037898015103.554171933598155760. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.033116133s
	
	
	==> describe nodes <==
	Name:               embed-certs-542273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-542273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=embed-certs-542273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-542273
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:54:33 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:54:33 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:54:33 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:54:33 +0000   Wed, 17 Dec 2025 11:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-542273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                9ff27ec3-7f97-49af-87a4-abbb0c483315
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-t66bd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-embed-certs-542273                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-lvlhs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-542273             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-542273    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-gfbw9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-542273             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node embed-certs-542273 event: Registered Node embed-certs-542273 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-542273 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [8b85a8944f93007add090cbfa290abc0d2b44a5bf2596ef80b1e1959300057ed] <==
	{"level":"warn","ts":"2025-12-17T11:53:59.834970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.845301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.854406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.863064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.872443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.880315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.905834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.914118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:53:59.923150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:00.773766Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.741095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-12-17T11:54:00.773881Z","caller":"traceutil/trace.go:172","msg":"trace[2069168225] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:15; }","duration":"118.857365ms","start":"2025-12-17T11:54:00.654992Z","end":"2025-12-17T11:54:00.773849Z","steps":["trace[2069168225] 'agreement among raft nodes before linearized reading'  (duration: 68.817642ms)","trace[2069168225] 'range keys from in-memory index tree'  (duration: 49.803563ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:54:00.773973Z","caller":"traceutil/trace.go:172","msg":"trace[2147007252] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"116.785531ms","start":"2025-12-17T11:54:00.657171Z","end":"2025-12-17T11:54:00.773956Z","steps":["trace[2147007252] 'process raft request'  (duration: 116.599586ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.774036Z","caller":"traceutil/trace.go:172","msg":"trace[1193228974] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"111.385975ms","start":"2025-12-17T11:54:00.662611Z","end":"2025-12-17T11:54:00.773997Z","steps":["trace[1193228974] 'process raft request'  (duration: 111.331852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.774195Z","caller":"traceutil/trace.go:172","msg":"trace[1725631076] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"116.267011ms","start":"2025-12-17T11:54:00.657915Z","end":"2025-12-17T11:54:00.774182Z","steps":["trace[1725631076] 'process raft request'  (duration: 115.886189ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.774245Z","caller":"traceutil/trace.go:172","msg":"trace[1438188310] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"128.834779ms","start":"2025-12-17T11:54:00.645387Z","end":"2025-12-17T11:54:00.774222Z","steps":["trace[1438188310] 'process raft request'  (duration: 78.444725ms)","trace[1438188310] 'compare'  (duration: 49.752718ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:54:00.774331Z","caller":"traceutil/trace.go:172","msg":"trace[1682369126] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"116.322894ms","start":"2025-12-17T11:54:00.657987Z","end":"2025-12-17T11:54:00.774310Z","steps":["trace[1682369126] 'process raft request'  (duration: 115.834736ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.774465Z","caller":"traceutil/trace.go:172","msg":"trace[1848716588] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"116.415828ms","start":"2025-12-17T11:54:00.658041Z","end":"2025-12-17T11:54:00.774456Z","steps":["trace[1848716588] 'process raft request'  (duration: 115.79922ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.774783Z","caller":"traceutil/trace.go:172","msg":"trace[1892480657] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"115.917136ms","start":"2025-12-17T11:54:00.658845Z","end":"2025-12-17T11:54:00.774762Z","steps":["trace[1892480657] 'process raft request'  (duration: 115.022909ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.774902Z","caller":"traceutil/trace.go:172","msg":"trace[1184626794] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"115.759127ms","start":"2025-12-17T11:54:00.659134Z","end":"2025-12-17T11:54:00.774893Z","steps":["trace[1184626794] 'process raft request'  (duration: 114.770747ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:00.775209Z","caller":"traceutil/trace.go:172","msg":"trace[2083567072] transaction","detail":"{read_only:false; response_revision:17; number_of_response:1; }","duration":"118.811254ms","start":"2025-12-17T11:54:00.656388Z","end":"2025-12-17T11:54:00.775200Z","steps":["trace[2083567072] 'process raft request'  (duration: 117.309186ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:04.865626Z","caller":"traceutil/trace.go:172","msg":"trace[544407695] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"123.589547ms","start":"2025-12-17T11:54:04.742017Z","end":"2025-12-17T11:54:04.865607Z","steps":["trace[544407695] 'process raft request'  (duration: 123.454476ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:54:07.145291Z","caller":"traceutil/trace.go:172","msg":"trace[618346963] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"121.934984ms","start":"2025-12-17T11:54:07.023340Z","end":"2025-12-17T11:54:07.145275Z","steps":["trace[618346963] 'process raft request'  (duration: 121.847247ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:54:07.401689Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.002231ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766895231077149 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T11:54:07.401790Z","caller":"traceutil/trace.go:172","msg":"trace[1109234586] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"248.864245ms","start":"2025-12-17T11:54:07.152912Z","end":"2025-12-17T11:54:07.401776Z","steps":["trace[1109234586] 'process raft request'  (duration: 119.321863ms)","trace[1109234586] 'compare'  (duration: 128.891643ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:54:07.560943Z","caller":"traceutil/trace.go:172","msg":"trace[826578242] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"148.548768ms","start":"2025-12-17T11:54:07.412378Z","end":"2025-12-17T11:54:07.560927Z","steps":["trace[826578242] 'process raft request'  (duration: 148.427071ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:54:36 up  5:37,  0 user,  load average: 5.66, 3.53, 2.27
	Linux embed-certs-542273 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e184dc423a02813082f791b899963cfcf724863d1f4ea96420777e8f4b36117] <==
	I1217 11:54:11.248241       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:11.340723       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:54:11.340922       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:11.340951       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:11.340977       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:11.543934       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:11.543965       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:11.543977       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:11.544114       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:11.941503       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:11.941550       1 metrics.go:72] Registering metrics
	I1217 11:54:11.941609       1 controller.go:711] "Syncing nftables rules"
	I1217 11:54:21.550721       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:54:21.550801       1 main.go:301] handling current node
	I1217 11:54:31.546089       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:54:31.546123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a49df69c9d20b7f59ffef28cf0b5e3832bfb82471e1cc684547287a841425e6] <==
	E1217 11:54:00.581640       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1217 11:54:00.623925       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:00.655054       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 11:54:00.655310       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:00.775475       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:00.781594       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:00.782382       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:54:01.428587       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 11:54:01.434382       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:54:01.434402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:54:02.027478       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:02.068407       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:02.134658       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:54:02.141757       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 11:54:02.143231       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:02.148128       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:02.444477       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:02.970110       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:02.981822       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:54:02.997445       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 11:54:07.750359       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:07.778924       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:08.197455       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 11:54:08.499045       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1217 11:54:35.026875       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:56796: use of closed network connection
	
	
	==> kube-controller-manager [918f98ac571c3d523a6aa9d45194d85165f76bd5cadfe573e319f8528fbe5b5a] <==
	I1217 11:54:07.643919       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 11:54:07.643982       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:54:07.643985       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 11:54:07.644333       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 11:54:07.645146       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 11:54:07.645209       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 11:54:07.645232       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 11:54:07.647677       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 11:54:07.648393       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 11:54:07.649587       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 11:54:07.653910       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 11:54:07.662192       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 11:54:07.663393       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 11:54:07.665611       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 11:54:07.666853       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 11:54:07.667474       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 11:54:07.668518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 11:54:07.674237       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:54:07.674237       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 11:54:07.674857       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:54:07.686002       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:54:07.693837       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:54:07.693857       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 11:54:07.693863       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 11:54:22.610512       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5e2201c9c8e62bbee0269adb22ea1d6ba77574c3cd7f5e8cc0d8d2f401769f62] <==
	I1217 11:54:08.720869       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:08.851968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:54:08.952498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:54:08.952665       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 11:54:08.952825       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:08.987645       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:08.987770       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:54:08.996441       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:08.996922       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:54:08.997004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:08.998931       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:08.999102       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:08.999194       1 config.go:309] "Starting node config controller"
	I1217 11:54:08.999210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:08.999222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:54:08.999373       1 config.go:200] "Starting service config controller"
	I1217 11:54:08.999379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:08.999395       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:08.999400       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:09.100629       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:54:09.102614       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:54:09.102635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ba70cf78eefd6d1cd6820b7512a2e298101a199261a2f288e3c95b9284e786ea] <==
	E1217 11:54:00.493697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:54:00.493767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 11:54:00.493813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:54:00.493903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:54:00.489727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:54:00.493979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 11:54:00.495674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:54:00.496084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:54:01.303453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:54:01.325785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:54:01.340083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:54:01.342143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:54:01.376702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 11:54:01.387762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:54:01.424305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:54:01.435953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:54:01.488195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 11:54:01.506369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:54:01.631007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:54:01.743773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:54:01.751167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:54:01.818346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:54:01.833451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:54:01.857840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1217 11:54:04.880671       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:54:03 embed-certs-542273 kubelet[1352]: I1217 11:54:03.956526    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-542273" podStartSLOduration=1.956493999 podStartE2EDuration="1.956493999s" podCreationTimestamp="2025-12-17 11:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:03.956406265 +0000 UTC m=+1.183270870" watchObservedRunningTime="2025-12-17 11:54:03.956493999 +0000 UTC m=+1.183358608"
	Dec 17 11:54:03 embed-certs-542273 kubelet[1352]: I1217 11:54:03.994587    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-542273" podStartSLOduration=1.9945286709999999 podStartE2EDuration="1.994528671s" podCreationTimestamp="2025-12-17 11:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:03.976523137 +0000 UTC m=+1.203387745" watchObservedRunningTime="2025-12-17 11:54:03.994528671 +0000 UTC m=+1.221393277"
	Dec 17 11:54:04 embed-certs-542273 kubelet[1352]: I1217 11:54:04.008200    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-542273" podStartSLOduration=2.008174193 podStartE2EDuration="2.008174193s" podCreationTimestamp="2025-12-17 11:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:03.995095989 +0000 UTC m=+1.221960597" watchObservedRunningTime="2025-12-17 11:54:04.008174193 +0000 UTC m=+1.235038802"
	Dec 17 11:54:04 embed-certs-542273 kubelet[1352]: I1217 11:54:04.022516    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-542273" podStartSLOduration=2.022270988 podStartE2EDuration="2.022270988s" podCreationTimestamp="2025-12-17 11:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:04.008337211 +0000 UTC m=+1.235201815" watchObservedRunningTime="2025-12-17 11:54:04.022270988 +0000 UTC m=+1.249135596"
	Dec 17 11:54:07 embed-certs-542273 kubelet[1352]: I1217 11:54:07.640872    1352 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 11:54:07 embed-certs-542273 kubelet[1352]: I1217 11:54:07.641841    1352 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289197    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a-xtables-lock\") pod \"kindnet-lvlhs\" (UID: \"79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a\") " pod="kube-system/kindnet-lvlhs"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289235    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a-lib-modules\") pod \"kindnet-lvlhs\" (UID: \"79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a\") " pod="kube-system/kindnet-lvlhs"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289255    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kpk\" (UniqueName: \"kubernetes.io/projected/79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a-kube-api-access-x5kpk\") pod \"kindnet-lvlhs\" (UID: \"79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a\") " pod="kube-system/kindnet-lvlhs"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289276    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/409200b4-d7e2-4aa0-87f9-64c6f73e93c5-kube-proxy\") pod \"kube-proxy-gfbw9\" (UID: \"409200b4-d7e2-4aa0-87f9-64c6f73e93c5\") " pod="kube-system/kube-proxy-gfbw9"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289303    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/409200b4-d7e2-4aa0-87f9-64c6f73e93c5-lib-modules\") pod \"kube-proxy-gfbw9\" (UID: \"409200b4-d7e2-4aa0-87f9-64c6f73e93c5\") " pod="kube-system/kube-proxy-gfbw9"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289325    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a-cni-cfg\") pod \"kindnet-lvlhs\" (UID: \"79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a\") " pod="kube-system/kindnet-lvlhs"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289347    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/409200b4-d7e2-4aa0-87f9-64c6f73e93c5-xtables-lock\") pod \"kube-proxy-gfbw9\" (UID: \"409200b4-d7e2-4aa0-87f9-64c6f73e93c5\") " pod="kube-system/kube-proxy-gfbw9"
	Dec 17 11:54:08 embed-certs-542273 kubelet[1352]: I1217 11:54:08.289374    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw6f9\" (UniqueName: \"kubernetes.io/projected/409200b4-d7e2-4aa0-87f9-64c6f73e93c5-kube-api-access-gw6f9\") pod \"kube-proxy-gfbw9\" (UID: \"409200b4-d7e2-4aa0-87f9-64c6f73e93c5\") " pod="kube-system/kube-proxy-gfbw9"
	Dec 17 11:54:09 embed-certs-542273 kubelet[1352]: I1217 11:54:09.542590    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gfbw9" podStartSLOduration=1.542567398 podStartE2EDuration="1.542567398s" podCreationTimestamp="2025-12-17 11:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:08.935701618 +0000 UTC m=+6.162566353" watchObservedRunningTime="2025-12-17 11:54:09.542567398 +0000 UTC m=+6.769432005"
	Dec 17 11:54:11 embed-certs-542273 kubelet[1352]: I1217 11:54:11.972590    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lvlhs" podStartSLOduration=1.5891723180000001 podStartE2EDuration="3.972564714s" podCreationTimestamp="2025-12-17 11:54:08 +0000 UTC" firstStartedPulling="2025-12-17 11:54:08.546863511 +0000 UTC m=+5.773728111" lastFinishedPulling="2025-12-17 11:54:10.930255907 +0000 UTC m=+8.157120507" observedRunningTime="2025-12-17 11:54:11.940812744 +0000 UTC m=+9.167677352" watchObservedRunningTime="2025-12-17 11:54:11.972564714 +0000 UTC m=+9.199429323"
	Dec 17 11:54:21 embed-certs-542273 kubelet[1352]: I1217 11:54:21.995813    1352 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 11:54:22 embed-certs-542273 kubelet[1352]: I1217 11:54:22.094775    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkml\" (UniqueName: \"kubernetes.io/projected/12ccdad4-eb85-447a-b66a-5b9df90b40e4-kube-api-access-mwkml\") pod \"coredns-66bc5c9577-t66bd\" (UID: \"12ccdad4-eb85-447a-b66a-5b9df90b40e4\") " pod="kube-system/coredns-66bc5c9577-t66bd"
	Dec 17 11:54:22 embed-certs-542273 kubelet[1352]: I1217 11:54:22.094829    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/88cd3e31-ccf4-442e-9f0e-e1abc10069b5-tmp\") pod \"storage-provisioner\" (UID: \"88cd3e31-ccf4-442e-9f0e-e1abc10069b5\") " pod="kube-system/storage-provisioner"
	Dec 17 11:54:22 embed-certs-542273 kubelet[1352]: I1217 11:54:22.094852    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12ccdad4-eb85-447a-b66a-5b9df90b40e4-config-volume\") pod \"coredns-66bc5c9577-t66bd\" (UID: \"12ccdad4-eb85-447a-b66a-5b9df90b40e4\") " pod="kube-system/coredns-66bc5c9577-t66bd"
	Dec 17 11:54:22 embed-certs-542273 kubelet[1352]: I1217 11:54:22.094970    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvnj\" (UniqueName: \"kubernetes.io/projected/88cd3e31-ccf4-442e-9f0e-e1abc10069b5-kube-api-access-6hvnj\") pod \"storage-provisioner\" (UID: \"88cd3e31-ccf4-442e-9f0e-e1abc10069b5\") " pod="kube-system/storage-provisioner"
	Dec 17 11:54:22 embed-certs-542273 kubelet[1352]: I1217 11:54:22.985439    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t66bd" podStartSLOduration=14.985412732 podStartE2EDuration="14.985412732s" podCreationTimestamp="2025-12-17 11:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:22.972890798 +0000 UTC m=+20.199755406" watchObservedRunningTime="2025-12-17 11:54:22.985412732 +0000 UTC m=+20.212277340"
	Dec 17 11:54:23 embed-certs-542273 kubelet[1352]: I1217 11:54:23.000253    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.000225601 podStartE2EDuration="14.000225601s" podCreationTimestamp="2025-12-17 11:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:22.986174888 +0000 UTC m=+20.213039496" watchObservedRunningTime="2025-12-17 11:54:23.000225601 +0000 UTC m=+20.227090208"
	Dec 17 11:54:25 embed-certs-542273 kubelet[1352]: I1217 11:54:25.013428    1352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxkt\" (UniqueName: \"kubernetes.io/projected/93dc2ecf-3c1a-4f60-bd0e-6f961d537d2c-kube-api-access-cwxkt\") pod \"busybox\" (UID: \"93dc2ecf-3c1a-4f60-bd0e-6f961d537d2c\") " pod="default/busybox"
	Dec 17 11:54:27 embed-certs-542273 kubelet[1352]: I1217 11:54:27.992013    1352 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.08129866 podStartE2EDuration="3.991991713s" podCreationTimestamp="2025-12-17 11:54:24 +0000 UTC" firstStartedPulling="2025-12-17 11:54:25.249259564 +0000 UTC m=+22.476124156" lastFinishedPulling="2025-12-17 11:54:27.159952604 +0000 UTC m=+24.386817209" observedRunningTime="2025-12-17 11:54:27.991946215 +0000 UTC m=+25.218810822" watchObservedRunningTime="2025-12-17 11:54:27.991991713 +0000 UTC m=+25.218856321"
	
	
	==> storage-provisioner [9d650cf2dca8e169fd124a2cf64fd47ff6310ceb75604bcff0775aa53f3889d9] <==
	I1217 11:54:22.412420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:54:22.421169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:54:22.421252       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:54:22.423645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:22.429100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:54:22.429235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:54:22.429466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-542273_22120f19-51a5-44c0-9dcc-f4e88631e3c3!
	I1217 11:54:22.429367       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8cbc9e0-f980-443c-9469-43664e3fa9a6", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-542273_22120f19-51a5-44c0-9dcc-f4e88631e3c3 became leader
	W1217 11:54:22.431849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:22.435190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:54:22.530715       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-542273_22120f19-51a5-44c0-9dcc-f4e88631e3c3!
	W1217 11:54:24.438734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:24.443223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:26.446889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:26.452296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:28.455714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:28.460017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:30.463252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:30.469813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:32.474049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:32.478796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:34.482833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:34.486911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:36.490786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:36.496472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-542273 -n embed-certs-542273
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-542273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.05s)
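Note on the post-mortem above: the component logs show a cluster that is otherwise healthy (kindnet syncing nftables rules and handling its node, the apiserver admitting resources, storage-provisioner holding its lease), so the helpers fall back to querying for any pod that is not in the Running phase. Below is a minimal Go sketch of that same query, shelling out to kubectl with the exact flags used by the helper command above; the function name and the hard-coded context are illustrative only, not part of the test harness.

	// nonRunningPods mirrors the post-mortem helper above: it asks kubectl for
	// the names of every pod, in any namespace, whose status.phase is not Running.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func nonRunningPods(kubeContext string) (string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Context name taken from the failing profile in the logs above.
		names, err := nonRunningPods("embed-certs-542273")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("pods not in Running phase:", names)
	}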

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-601829 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-601829 --alsologtostderr -v=1: exit status 80 (2.295428195s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-601829 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:54:44.988188 1964113 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:44.988311 1964113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:44.988320 1964113 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:44.988324 1964113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:44.988499 1964113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:44.988752 1964113 out.go:368] Setting JSON to false
	I1217 11:54:44.988774 1964113 mustload.go:66] Loading cluster: newest-cni-601829
	I1217 11:54:44.989121 1964113 config.go:182] Loaded profile config "newest-cni-601829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:44.989528 1964113 cli_runner.go:164] Run: docker container inspect newest-cni-601829 --format={{.State.Status}}
	I1217 11:54:45.008598 1964113 host.go:66] Checking if "newest-cni-601829" exists ...
	I1217 11:54:45.008898 1964113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:45.071902 1964113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-17 11:54:45.06016649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:45.072765 1964113 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-601829 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 11:54:45.074860 1964113 out.go:179] * Pausing node newest-cni-601829 ... 
	I1217 11:54:45.076377 1964113 host.go:66] Checking if "newest-cni-601829" exists ...
	I1217 11:54:45.076704 1964113 ssh_runner.go:195] Run: systemctl --version
	I1217 11:54:45.076756 1964113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-601829
	I1217 11:54:45.096992 1964113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/newest-cni-601829/id_rsa Username:docker}
	I1217 11:54:45.191706 1964113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:45.205100 1964113 pause.go:52] kubelet running: true
	I1217 11:54:45.205182 1964113 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:54:45.330763 1964113 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:54:45.330879 1964113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:54:45.405031 1964113 cri.go:89] found id: "c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50"
	I1217 11:54:45.405058 1964113 cri.go:89] found id: "669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13"
	I1217 11:54:45.405064 1964113 cri.go:89] found id: "083255864c4b6da94f3364ec07f1239b62d744e32359ba7f306864571bec5781"
	I1217 11:54:45.405069 1964113 cri.go:89] found id: "deb114159803887ef4c8f5a7bf03aff5659b8ba6236301758a913b5fb2be9360"
	I1217 11:54:45.405073 1964113 cri.go:89] found id: "e8bbfa1edf5382c6ee1addad89bb4987d033da47ffe9ba3451542d30f8a2ba20"
	I1217 11:54:45.405078 1964113 cri.go:89] found id: "9b48e50a2ca11fa0201aa0e2c85a506d04ec5e8272efea4ebf4b314a2777a9a4"
	I1217 11:54:45.405083 1964113 cri.go:89] found id: ""
	I1217 11:54:45.405125 1964113 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:54:45.417787 1964113 retry.go:31] will retry after 150.479426ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:45Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:54:45.569250 1964113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:45.583193 1964113 pause.go:52] kubelet running: false
	I1217 11:54:45.583264 1964113 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:54:45.714678 1964113 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:54:45.714762 1964113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:54:45.787626 1964113 cri.go:89] found id: "c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50"
	I1217 11:54:45.787647 1964113 cri.go:89] found id: "669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13"
	I1217 11:54:45.787653 1964113 cri.go:89] found id: "083255864c4b6da94f3364ec07f1239b62d744e32359ba7f306864571bec5781"
	I1217 11:54:45.787658 1964113 cri.go:89] found id: "deb114159803887ef4c8f5a7bf03aff5659b8ba6236301758a913b5fb2be9360"
	I1217 11:54:45.787668 1964113 cri.go:89] found id: "e8bbfa1edf5382c6ee1addad89bb4987d033da47ffe9ba3451542d30f8a2ba20"
	I1217 11:54:45.787674 1964113 cri.go:89] found id: "9b48e50a2ca11fa0201aa0e2c85a506d04ec5e8272efea4ebf4b314a2777a9a4"
	I1217 11:54:45.787679 1964113 cri.go:89] found id: ""
	I1217 11:54:45.787720 1964113 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:54:45.804954 1964113 retry.go:31] will retry after 369.385514ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:45Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:54:46.174621 1964113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:46.189409 1964113 pause.go:52] kubelet running: false
	I1217 11:54:46.189516 1964113 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:54:46.315082 1964113 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:54:46.315174 1964113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:54:46.395414 1964113 cri.go:89] found id: "c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50"
	I1217 11:54:46.395433 1964113 cri.go:89] found id: "669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13"
	I1217 11:54:46.395438 1964113 cri.go:89] found id: "083255864c4b6da94f3364ec07f1239b62d744e32359ba7f306864571bec5781"
	I1217 11:54:46.395443 1964113 cri.go:89] found id: "deb114159803887ef4c8f5a7bf03aff5659b8ba6236301758a913b5fb2be9360"
	I1217 11:54:46.395447 1964113 cri.go:89] found id: "e8bbfa1edf5382c6ee1addad89bb4987d033da47ffe9ba3451542d30f8a2ba20"
	I1217 11:54:46.395452 1964113 cri.go:89] found id: "9b48e50a2ca11fa0201aa0e2c85a506d04ec5e8272efea4ebf4b314a2777a9a4"
	I1217 11:54:46.395455 1964113 cri.go:89] found id: ""
	I1217 11:54:46.395510 1964113 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:54:46.408405 1964113 retry.go:31] will retry after 550.03474ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:46Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:54:46.958890 1964113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:46.974242 1964113 pause.go:52] kubelet running: false
	I1217 11:54:46.974310 1964113 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:54:47.104307 1964113 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:54:47.104380 1964113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:54:47.190025 1964113 cri.go:89] found id: "c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50"
	I1217 11:54:47.190081 1964113 cri.go:89] found id: "669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13"
	I1217 11:54:47.190089 1964113 cri.go:89] found id: "083255864c4b6da94f3364ec07f1239b62d744e32359ba7f306864571bec5781"
	I1217 11:54:47.190095 1964113 cri.go:89] found id: "deb114159803887ef4c8f5a7bf03aff5659b8ba6236301758a913b5fb2be9360"
	I1217 11:54:47.190100 1964113 cri.go:89] found id: "e8bbfa1edf5382c6ee1addad89bb4987d033da47ffe9ba3451542d30f8a2ba20"
	I1217 11:54:47.190104 1964113 cri.go:89] found id: "9b48e50a2ca11fa0201aa0e2c85a506d04ec5e8272efea4ebf4b314a2777a9a4"
	I1217 11:54:47.190109 1964113 cri.go:89] found id: ""
	I1217 11:54:47.190172 1964113 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:54:47.205514 1964113 out.go:203] 
	W1217 11:54:47.206941 1964113 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:54:47.206963 1964113 out.go:285] * 
	* 
	W1217 11:54:47.214135 1964113 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:54:47.215647 1964113 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-601829 --alsologtostderr -v=1 failed: exit status 80
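Note on the failure above: every attempt to enumerate containers with `sudo runc list -f json` returns `open /run/runc: no such file or directory`, minikube retries with growing delays (150ms, 369ms, 550ms in the trace), and the pause finally exits with GUEST_PAUSE. The sketch below is a rough Go illustration of that retry-until-give-up shape only; it is not minikube's retry package, and the attempt count and starting delay are assumptions.

	// listRunningWithRetry keeps re-running "sudo runc list -f json" with an
	// increasing delay between attempts, the same pattern visible in the trace
	// above, and returns the last error once the attempts are exhausted.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func listRunningWithRetry(attempts int, delay time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("runc list failed: %w: %s", err, out)
			time.Sleep(delay)
			delay += delay / 2 // stretch the wait a little each round
		}
		return nil, lastErr
	}

	func main() {
		if _, err := listRunningWithRetry(4, 150*time.Millisecond); err != nil {
			// Analogous to the GUEST_PAUSE exit in the log above.
			fmt.Println("giving up:", err)
		}
	}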
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-601829
helpers_test.go:244: (dbg) docker inspect newest-cni-601829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e",
	        "Created": "2025-12-17T11:54:07.887432598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1960279,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:34.339666939Z",
	            "FinishedAt": "2025-12-17T11:54:33.414946624Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/hosts",
	        "LogPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e-json.log",
	        "Name": "/newest-cni-601829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-601829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-601829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e",
	                "LowerDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-601829",
	                "Source": "/var/lib/docker/volumes/newest-cni-601829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-601829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-601829",
	                "name.minikube.sigs.k8s.io": "newest-cni-601829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e618c87a359b5ed2b957ced4ca9d9e18363cea85094389e89601927a68916387",
	            "SandboxKey": "/var/run/docker/netns/e618c87a359b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34621"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34622"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34625"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34623"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34624"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-601829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba75054aca4fb8ab88e7766d66917111b8a98c9b6621d8d4536b729c295e0bd7",
	                    "EndpointID": "d0f8cf59792410181f71c9f68f260caba01888bfbdf4a8c63be8687dc12c0cb1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7a:72:8f:36:e9:1c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-601829",
	                        "0771ab9e37be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
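Individual fields in the inspect dump above can also be read back with a Docker Go template instead of scanning the full JSON; a minimal sketch (same profile name as above), which would print the host port mapped to the container's 22/tcp SSH endpoint (34621 in this run):

	docker container inspect newest-cni-601829 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'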
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829: exit status 2 (348.484964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-601829 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-601829 logs -n 25: (1.05591747s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ image   │ newest-cni-601829 image list --format=json                                                                                                                                                                                                         │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ pause   │ -p newest-cni-601829 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:54:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:54:43.372608 1963245 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:43.372929 1963245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:43.372940 1963245 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:43.372945 1963245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:43.373189 1963245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:43.373694 1963245 out.go:368] Setting JSON to false
	I1217 11:54:43.374972 1963245 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20228,"bootTime":1765952255,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:54:43.375046 1963245 start.go:143] virtualization: kvm guest
	I1217 11:54:43.376929 1963245 out.go:179] * [no-preload-737478] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:54:43.378750 1963245 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:54:43.378809 1963245 notify.go:221] Checking for updates...
	I1217 11:54:43.381007 1963245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:54:43.382275 1963245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:43.383341 1963245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:54:43.384421 1963245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:54:43.385472 1963245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:54:43.386902 1963245 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:43.387464 1963245 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:54:43.411926 1963245 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:54:43.412056 1963245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:43.468520 1963245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-17 11:54:43.458780495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:43.468675 1963245 docker.go:319] overlay module found
	I1217 11:54:43.470781 1963245 out.go:179] * Using the docker driver based on existing profile
	I1217 11:54:43.472215 1963245 start.go:309] selected driver: docker
	I1217 11:54:43.472233 1963245 start.go:927] validating driver "docker" against &{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:43.472336 1963245 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:54:43.472956 1963245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:43.531701 1963245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-17 11:54:43.520989371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:43.532107 1963245 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:54:43.532155 1963245 cni.go:84] Creating CNI manager for ""
	I1217 11:54:43.532242 1963245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:43.532294 1963245 start.go:353] cluster config:
	{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:43.534200 1963245 out.go:179] * Starting "no-preload-737478" primary control-plane node in "no-preload-737478" cluster
	I1217 11:54:43.535375 1963245 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:54:43.536573 1963245 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:54:43.537896 1963245 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:43.537991 1963245 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:54:43.538063 1963245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:54:43.538233 1963245 cache.go:107] acquiring lock: {Name:mkce365350b466caa625a853fa04d355dafaf737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538872 1963245 cache.go:107] acquiring lock: {Name:mk9b11255ca4aa317635277ae364f17e3f34e430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538880 1963245 cache.go:107] acquiring lock: {Name:mka9f0fd2d6e879a6d51520f3e35096f83561a39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538926 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:54:43.538942 1963245 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 726.203µs
	I1217 11:54:43.538965 1963245 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:54:43.538585 1963245 cache.go:107] acquiring lock: {Name:mk195f08cb3604d752263934a40f27bac4021dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539005 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:54:43.538743 1963245 cache.go:107] acquiring lock: {Name:mkb34fd803350485ad0146dad2d5e5975c7a1fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539019 1963245 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 146.962µs
	I1217 11:54:43.538704 1963245 cache.go:107] acquiring lock: {Name:mka6d3f4b4fc66993c428fbcff6e92cde119967c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539038 1963245 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539020 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:54:43.539068 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:54:43.538251 1963245 cache.go:107] acquiring lock: {Name:mk6a07e7ceeb8fe04825f0802eeaaeeee4c06443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539076 1963245 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 336.63µs
	I1217 11:54:43.539084 1963245 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539069 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:54:43.539097 1963245 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 529.938µs
	I1217 11:54:43.539105 1963245 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:54:43.539087 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:54:43.539123 1963245 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 423.999µs
	I1217 11:54:43.539130 1963245 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:54:43.539054 1963245 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 813µs
	I1217 11:54:43.539186 1963245 cache.go:107] acquiring lock: {Name:mk69f66d091b3517cc19ba9a659d980495d072d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539224 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:54:43.539238 1963245 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539239 1963245 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.001705ms
	I1217 11:54:43.539253 1963245 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539271 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:54:43.539280 1963245 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 507.252µs
	I1217 11:54:43.539289 1963245 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:54:43.539299 1963245 cache.go:87] Successfully saved all images to host disk.
	I1217 11:54:43.563571 1963245 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:54:43.563593 1963245 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:54:43.563614 1963245 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:54:43.563675 1963245 start.go:360] acquireMachinesLock for no-preload-737478: {Name:mk1ef5e7ed91896001178c3ee81911e4005528d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.563747 1963245 start.go:364] duration metric: took 49.755µs to acquireMachinesLock for "no-preload-737478"
	I1217 11:54:43.563771 1963245 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:54:43.563781 1963245 fix.go:54] fixHost starting: 
	I1217 11:54:43.564060 1963245 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:54:43.586018 1963245 fix.go:112] recreateIfNeeded on no-preload-737478: state=Stopped err=<nil>
	W1217 11:54:43.586071 1963245 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:54:43.272638 1960071 addons.go:530] duration metric: took 1.853650188s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:54:43.754696 1960071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:43.760260 1960071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:54:43.760297 1960071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 11:54:44.253919 1960071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:44.258064 1960071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:54:44.259107 1960071 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 11:54:44.259132 1960071 api_server.go:131] duration metric: took 1.005429065s to wait for apiserver health ...
	I1217 11:54:44.259143 1960071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:44.262904 1960071 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:44.262947 1960071 system_pods.go:61] "coredns-7d764666f9-jwmxw" [1daf4bf2-080a-49a2-ad9f-fea9cdbc571b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:44.262958 1960071 system_pods.go:61] "etcd-newest-cni-601829" [d71be3a5-4bd0-47e7-98ea-b50d6c2abd0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:54:44.262966 1960071 system_pods.go:61] "kindnet-t6q5x" [6c3deb88-31c5-4008-aae7-7467aa3f9e81] Running
	I1217 11:54:44.262975 1960071 system_pods.go:61] "kube-apiserver-newest-cni-601829" [eb175f99-213c-4663-bbf7-43c54202dbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:54:44.262983 1960071 system_pods.go:61] "kube-controller-manager-newest-cni-601829" [f9d7a310-c545-49de-9def-714ba54d3bbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:54:44.262990 1960071 system_pods.go:61] "kube-proxy-grz2c" [35f43b51-b45f-4c1c-a95f-3a34192b4334] Running
	I1217 11:54:44.262999 1960071 system_pods.go:61] "kube-scheduler-newest-cni-601829" [79ecb056-ebc4-4c51-85a4-727a2d633751] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:54:44.263008 1960071 system_pods.go:61] "storage-provisioner" [3e2c9b6f-d0cc-48bc-ba8d-6da58cb1968d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:44.263024 1960071 system_pods.go:74] duration metric: took 3.874548ms to wait for pod list to return data ...
	I1217 11:54:44.263034 1960071 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:44.265253 1960071 default_sa.go:45] found service account: "default"
	I1217 11:54:44.265273 1960071 default_sa.go:55] duration metric: took 2.232696ms for default service account to be created ...
	I1217 11:54:44.265286 1960071 kubeadm.go:587] duration metric: took 2.84630377s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:44.265307 1960071 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:44.267523 1960071 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:44.267569 1960071 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:44.267608 1960071 node_conditions.go:105] duration metric: took 2.288058ms to run NodePressure ...
	I1217 11:54:44.267622 1960071 start.go:242] waiting for startup goroutines ...
	I1217 11:54:44.267631 1960071 start.go:247] waiting for cluster config update ...
	I1217 11:54:44.267642 1960071 start.go:256] writing updated cluster config ...
	I1217 11:54:44.267881 1960071 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:44.316576 1960071 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:54:44.318790 1960071 out.go:179] * Done! kubectl is now configured to use "newest-cni-601829" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.021396906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.026772111Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bdd6835e-156f-4ae9-8b29-aa91ce74d991 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.02869128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=118f8ff2-129d-4b86-aeb0-c4255375e07a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.029965602Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.031910252Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.03289422Z" level=info msg="Ran pod sandbox fc82392128bcecba2201e5c268cdfef41da1505af1a4393286d259dfdde611fe with infra container: kube-system/kube-proxy-grz2c/POD" id=118f8ff2-129d-4b86-aeb0-c4255375e07a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.033459481Z" level=info msg="Ran pod sandbox 1af2e4acee0190491c2bd7e8767ed683ae8d5018c337248b6e7d6c2bdc3d6e7f with infra container: kube-system/kindnet-t6q5x/POD" id=bdd6835e-156f-4ae9-8b29-aa91ce74d991 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.035396513Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=98e5f9b4-396b-48de-9e25-bc329e4e8174 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.036061662Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=aaac7f44-e2c5-4007-9997-06c14434201e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.037054718Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6b00cbc5-ce8c-425e-bb7d-05ec77caf608 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.037796165Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b4345a41-6ce7-42d1-a9dd-1ff164e0b354 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.038977552Z" level=info msg="Creating container: kube-system/kube-proxy-grz2c/kube-proxy" id=4baa61dd-27a2-4edc-bcfa-2989baad0b9b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.039120529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.039526089Z" level=info msg="Creating container: kube-system/kindnet-t6q5x/kindnet-cni" id=06beef90-4807-4001-9fca-e111a9abcf4f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.039664222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.046167842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.047021701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.047683688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.047028388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.103371539Z" level=info msg="Created container c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50: kube-system/kindnet-t6q5x/kindnet-cni" id=06beef90-4807-4001-9fca-e111a9abcf4f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.104246859Z" level=info msg="Starting container: c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50" id=556bf8d3-aecf-4bad-ab36-7ba1b0789cc3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.107495818Z" level=info msg="Started container" PID=1101 containerID=c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50 description=kube-system/kindnet-t6q5x/kindnet-cni id=556bf8d3-aecf-4bad-ab36-7ba1b0789cc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1af2e4acee0190491c2bd7e8767ed683ae8d5018c337248b6e7d6c2bdc3d6e7f
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.111144184Z" level=info msg="Created container 669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13: kube-system/kube-proxy-grz2c/kube-proxy" id=4baa61dd-27a2-4edc-bcfa-2989baad0b9b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.112064589Z" level=info msg="Starting container: 669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13" id=0aafee8c-0e15-4adb-98b5-2fae5301fd84 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.118903066Z" level=info msg="Started container" PID=1102 containerID=669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13 description=kube-system/kube-proxy-grz2c/kube-proxy id=0aafee8c-0e15-4adb-98b5-2fae5301fd84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc82392128bcecba2201e5c268cdfef41da1505af1a4393286d259dfdde611fe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c271c59be6e17       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   1af2e4acee019       kindnet-t6q5x                               kube-system
	669d3a2c60805       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   5 seconds ago       Running             kube-proxy                1                   fc82392128bce       kube-proxy-grz2c                            kube-system
	083255864c4b6       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   7 seconds ago       Running             kube-apiserver            1                   1371def546ba5       kube-apiserver-newest-cni-601829            kube-system
	deb1141598038       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   7 seconds ago       Running             kube-controller-manager   1                   562d3a1e8e939       kube-controller-manager-newest-cni-601829   kube-system
	e8bbfa1edf538       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   7 seconds ago       Running             kube-scheduler            1                   5e050b7475883       kube-scheduler-newest-cni-601829            kube-system
	9b48e50a2ca11       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   9a5e7ab55d679       etcd-newest-cni-601829                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-601829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-601829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=newest-cni-601829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-601829
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:54:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-601829
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                bf12f87d-6e9a-4666-9ac7-1005cb2f7e7a
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-601829                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-t6q5x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-601829             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-601829    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-grz2c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-601829             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-601829 event: Registered Node newest-cni-601829 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-601829 event: Registered Node newest-cni-601829 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [9b48e50a2ca11fa0201aa0e2c85a506d04ec5e8272efea4ebf4b314a2777a9a4] <==
	{"level":"info","ts":"2025-12-17T11:54:41.327990Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:41.328034Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:41.328088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-17T11:54:41.328245Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T11:54:41.328390Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:54:41.328451Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:41.328506Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:54:41.617773Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:41.617837Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:41.617945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:41.617970Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:41.617991Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.618794Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.618825Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:41.618847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.618856Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.620250Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-601829 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:54:41.620254Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:41.620281Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:41.620491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:41.620717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:41.621959Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:41.622399Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:41.627274Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T11:54:41.627347Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 11:54:48 up  5:37,  0 user,  load average: 4.86, 3.43, 2.25
	Linux newest-cni-601829 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50] <==
	I1217 11:54:43.348023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:43.348342       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 11:54:43.348502       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:43.348527       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:43.348587       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:43.643399       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:43.643440       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:43.643457       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:43.643635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:44.044026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:44.044058       1 metrics.go:72] Registering metrics
	I1217 11:54:44.044177       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [083255864c4b6da94f3364ec07f1239b62d744e32359ba7f306864571bec5781] <==
	I1217 11:54:42.643820       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 11:54:42.644111       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:42.644130       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 11:54:42.644142       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:42.644184       1 aggregator.go:187] initial CRD sync complete...
	I1217 11:54:42.644199       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:54:42.644205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:54:42.644212       1 cache.go:39] Caches are synced for autoregister controller
	E1217 11:54:42.650328       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 11:54:42.651217       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 11:54:42.672052       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:42.678414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:42.910925       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:42.910925       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:42.961368       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:43.009765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:43.045035       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:43.075904       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:43.160096       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.159.103"}
	I1217 11:54:43.172300       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.57.221"}
	I1217 11:54:43.546771       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 11:54:46.283733       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:46.384290       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:46.434087       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:54:46.484639       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [deb114159803887ef4c8f5a7bf03aff5659b8ba6236301758a913b5fb2be9360] <==
	I1217 11:54:45.796323       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.796439       1 range_allocator.go:177] "Sending events to api server"
	I1217 11:54:45.796510       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 11:54:45.796570       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:45.796602       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797162       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797196       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797343       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797382       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.799276       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.799383       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 11:54:45.799492       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-601829"
	I1217 11:54:45.799585       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 11:54:45.800871       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.802216       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.800976       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.801118       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.800964       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.801155       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.801132       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.806340       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.891658       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.891684       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 11:54:45.891691       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 11:54:45.895616       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13] <==
	I1217 11:54:43.171412       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:43.237639       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:43.337822       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:43.337879       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 11:54:43.338000       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:43.358634       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:43.358705       1 server_linux.go:136] "Using iptables Proxier"
	I1217 11:54:43.365039       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:43.365556       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 11:54:43.365582       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:43.366798       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:43.366822       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:43.366859       1 config.go:200] "Starting service config controller"
	I1217 11:54:43.366866       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:43.366884       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:43.366894       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:43.366978       1 config.go:309] "Starting node config controller"
	I1217 11:54:43.367002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:43.367012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:54:43.467498       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:54:43.467562       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:54:43.467562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e8bbfa1edf5382c6ee1addad89bb4987d033da47ffe9ba3451542d30f8a2ba20] <==
	I1217 11:54:41.720957       1 serving.go:386] Generated self-signed cert in-memory
	W1217 11:54:42.577008       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 11:54:42.577070       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 11:54:42.577083       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 11:54:42.577091       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 11:54:42.615455       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 11:54:42.615489       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:42.619578       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:54:42.619715       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:54:42.619742       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:42.619764       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 11:54:42.722246       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.743809     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.750472     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-601829\" already exists" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.751628     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.751748     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.752032     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.752294     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-601829" containerName="kube-controller-manager"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764194     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-601829\" already exists" pod="kube-system/etcd-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764304     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-601829" containerName="etcd"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764308     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-601829\" already exists" pod="kube-system/kube-apiserver-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764194     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-601829\" already exists" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764422     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-601829" containerName="kube-scheduler"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764552     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.812296     719 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.903459     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-lib-modules\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.903517     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f43b51-b45f-4c1c-a95f-3a34192b4334-lib-modules\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.903556     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-xtables-lock\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.904763     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-cni-cfg\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.904855     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f43b51-b45f-4c1c-a95f-3a34192b4334-xtables-lock\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:43 newest-cni-601829 kubelet[719]: E1217 11:54:43.757264     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-601829" containerName="kube-scheduler"
	Dec 17 11:54:43 newest-cni-601829 kubelet[719]: E1217 11:54:43.758026     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:43 newest-cni-601829 kubelet[719]: E1217 11:54:43.758439     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-601829" containerName="etcd"
	Dec 17 11:54:44 newest-cni-601829 kubelet[719]: E1217 11:54:44.759080     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:45 newest-cni-601829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:54:45 newest-cni-601829 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:54:45 newest-cni-601829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-601829 -n newest-cni-601829
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-601829 -n newest-cni-601829: exit status 2 (341.339304ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-601829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68: exit status 1 (64.895116ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jwmxw" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-66bmg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6cw68" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-601829
helpers_test.go:244: (dbg) docker inspect newest-cni-601829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e",
	        "Created": "2025-12-17T11:54:07.887432598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1960279,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:34.339666939Z",
	            "FinishedAt": "2025-12-17T11:54:33.414946624Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/hosts",
	        "LogPath": "/var/lib/docker/containers/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e/0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e-json.log",
	        "Name": "/newest-cni-601829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-601829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-601829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0771ab9e37be2453b68ab5db994c5ce52049c42bc00cb57eb707c2e7a720dc5e",
	                "LowerDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92305c7bcbe6d858081478111c9813f7fbeea2f88c68af02f3a0efbfde18c491/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-601829",
	                "Source": "/var/lib/docker/volumes/newest-cni-601829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-601829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-601829",
	                "name.minikube.sigs.k8s.io": "newest-cni-601829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e618c87a359b5ed2b957ced4ca9d9e18363cea85094389e89601927a68916387",
	            "SandboxKey": "/var/run/docker/netns/e618c87a359b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34621"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34622"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34625"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34623"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34624"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-601829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba75054aca4fb8ab88e7766d66917111b8a98c9b6621d8d4536b729c295e0bd7",
	                    "EndpointID": "d0f8cf59792410181f71c9f68f260caba01888bfbdf4a8c63be8687dc12c0cb1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7a:72:8f:36:e9:1c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-601829",
	                        "0771ab9e37be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829: exit status 2 (333.430918ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-601829 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-601829 logs -n 25: (1.047103172s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ image   │ newest-cni-601829 image list --format=json                                                                                                                                                                                                         │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ pause   │ -p newest-cni-601829 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-382022 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:54:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:54:43.372608 1963245 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:43.372929 1963245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:43.372940 1963245 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:43.372945 1963245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:43.373189 1963245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:43.373694 1963245 out.go:368] Setting JSON to false
	I1217 11:54:43.374972 1963245 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20228,"bootTime":1765952255,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:54:43.375046 1963245 start.go:143] virtualization: kvm guest
	I1217 11:54:43.376929 1963245 out.go:179] * [no-preload-737478] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:54:43.378750 1963245 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:54:43.378809 1963245 notify.go:221] Checking for updates...
	I1217 11:54:43.381007 1963245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:54:43.382275 1963245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:43.383341 1963245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:54:43.384421 1963245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:54:43.385472 1963245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:54:43.386902 1963245 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:43.387464 1963245 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:54:43.411926 1963245 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:54:43.412056 1963245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:43.468520 1963245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-17 11:54:43.458780495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:43.468675 1963245 docker.go:319] overlay module found
	I1217 11:54:43.470781 1963245 out.go:179] * Using the docker driver based on existing profile
	I1217 11:54:43.472215 1963245 start.go:309] selected driver: docker
	I1217 11:54:43.472233 1963245 start.go:927] validating driver "docker" against &{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:43.472336 1963245 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:54:43.472956 1963245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:43.531701 1963245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-17 11:54:43.520989371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:43.532107 1963245 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:54:43.532155 1963245 cni.go:84] Creating CNI manager for ""
	I1217 11:54:43.532242 1963245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:43.532294 1963245 start.go:353] cluster config:
	{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:43.534200 1963245 out.go:179] * Starting "no-preload-737478" primary control-plane node in "no-preload-737478" cluster
	I1217 11:54:43.535375 1963245 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:54:43.536573 1963245 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:54:43.537896 1963245 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:43.537991 1963245 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:54:43.538063 1963245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:54:43.538233 1963245 cache.go:107] acquiring lock: {Name:mkce365350b466caa625a853fa04d355dafaf737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538872 1963245 cache.go:107] acquiring lock: {Name:mk9b11255ca4aa317635277ae364f17e3f34e430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538880 1963245 cache.go:107] acquiring lock: {Name:mka9f0fd2d6e879a6d51520f3e35096f83561a39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538926 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:54:43.538942 1963245 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 726.203µs
	I1217 11:54:43.538965 1963245 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:54:43.538585 1963245 cache.go:107] acquiring lock: {Name:mk195f08cb3604d752263934a40f27bac4021dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539005 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:54:43.538743 1963245 cache.go:107] acquiring lock: {Name:mkb34fd803350485ad0146dad2d5e5975c7a1fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539019 1963245 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 146.962µs
	I1217 11:54:43.538704 1963245 cache.go:107] acquiring lock: {Name:mka6d3f4b4fc66993c428fbcff6e92cde119967c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539038 1963245 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539020 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:54:43.539068 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:54:43.538251 1963245 cache.go:107] acquiring lock: {Name:mk6a07e7ceeb8fe04825f0802eeaaeeee4c06443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539076 1963245 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 336.63µs
	I1217 11:54:43.539084 1963245 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539069 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:54:43.539097 1963245 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 529.938µs
	I1217 11:54:43.539105 1963245 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:54:43.539087 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:54:43.539123 1963245 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 423.999µs
	I1217 11:54:43.539130 1963245 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:54:43.539054 1963245 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 813µs
	I1217 11:54:43.539186 1963245 cache.go:107] acquiring lock: {Name:mk69f66d091b3517cc19ba9a659d980495d072d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539224 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:54:43.539238 1963245 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539239 1963245 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.001705ms
	I1217 11:54:43.539253 1963245 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539271 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:54:43.539280 1963245 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 507.252µs
	I1217 11:54:43.539289 1963245 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:54:43.539299 1963245 cache.go:87] Successfully saved all images to host disk.
	I1217 11:54:43.563571 1963245 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:54:43.563593 1963245 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:54:43.563614 1963245 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:54:43.563675 1963245 start.go:360] acquireMachinesLock for no-preload-737478: {Name:mk1ef5e7ed91896001178c3ee81911e4005528d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.563747 1963245 start.go:364] duration metric: took 49.755µs to acquireMachinesLock for "no-preload-737478"
	I1217 11:54:43.563771 1963245 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:54:43.563781 1963245 fix.go:54] fixHost starting: 
	I1217 11:54:43.564060 1963245 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:54:43.586018 1963245 fix.go:112] recreateIfNeeded on no-preload-737478: state=Stopped err=<nil>
	W1217 11:54:43.586071 1963245 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:54:43.272638 1960071 addons.go:530] duration metric: took 1.853650188s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:54:43.754696 1960071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:43.760260 1960071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:54:43.760297 1960071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 11:54:44.253919 1960071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:44.258064 1960071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:54:44.259107 1960071 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 11:54:44.259132 1960071 api_server.go:131] duration metric: took 1.005429065s to wait for apiserver health ...
	I1217 11:54:44.259143 1960071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:44.262904 1960071 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:44.262947 1960071 system_pods.go:61] "coredns-7d764666f9-jwmxw" [1daf4bf2-080a-49a2-ad9f-fea9cdbc571b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:44.262958 1960071 system_pods.go:61] "etcd-newest-cni-601829" [d71be3a5-4bd0-47e7-98ea-b50d6c2abd0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:54:44.262966 1960071 system_pods.go:61] "kindnet-t6q5x" [6c3deb88-31c5-4008-aae7-7467aa3f9e81] Running
	I1217 11:54:44.262975 1960071 system_pods.go:61] "kube-apiserver-newest-cni-601829" [eb175f99-213c-4663-bbf7-43c54202dbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:54:44.262983 1960071 system_pods.go:61] "kube-controller-manager-newest-cni-601829" [f9d7a310-c545-49de-9def-714ba54d3bbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:54:44.262990 1960071 system_pods.go:61] "kube-proxy-grz2c" [35f43b51-b45f-4c1c-a95f-3a34192b4334] Running
	I1217 11:54:44.262999 1960071 system_pods.go:61] "kube-scheduler-newest-cni-601829" [79ecb056-ebc4-4c51-85a4-727a2d633751] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:54:44.263008 1960071 system_pods.go:61] "storage-provisioner" [3e2c9b6f-d0cc-48bc-ba8d-6da58cb1968d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:44.263024 1960071 system_pods.go:74] duration metric: took 3.874548ms to wait for pod list to return data ...
	I1217 11:54:44.263034 1960071 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:44.265253 1960071 default_sa.go:45] found service account: "default"
	I1217 11:54:44.265273 1960071 default_sa.go:55] duration metric: took 2.232696ms for default service account to be created ...
	I1217 11:54:44.265286 1960071 kubeadm.go:587] duration metric: took 2.84630377s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:44.265307 1960071 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:44.267523 1960071 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:44.267569 1960071 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:44.267608 1960071 node_conditions.go:105] duration metric: took 2.288058ms to run NodePressure ...
	I1217 11:54:44.267622 1960071 start.go:242] waiting for startup goroutines ...
	I1217 11:54:44.267631 1960071 start.go:247] waiting for cluster config update ...
	I1217 11:54:44.267642 1960071 start.go:256] writing updated cluster config ...
	I1217 11:54:44.267881 1960071 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:44.316576 1960071 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:54:44.318790 1960071 out.go:179] * Done! kubectl is now configured to use "newest-cni-601829" cluster and "default" namespace by default
	I1217 11:54:43.588365 1963245 out.go:252] * Restarting existing docker container for "no-preload-737478" ...
	I1217 11:54:43.588456 1963245 cli_runner.go:164] Run: docker start no-preload-737478
	I1217 11:54:43.861343 1963245 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:54:43.881641 1963245 kic.go:430] container "no-preload-737478" state is running.
	I1217 11:54:43.882117 1963245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-737478
	I1217 11:54:43.900993 1963245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:54:43.901268 1963245 machine.go:94] provisionDockerMachine start ...
	I1217 11:54:43.901362 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:43.921514 1963245 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:43.921865 1963245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34626 <nil> <nil>}
	I1217 11:54:43.921885 1963245 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:54:43.922567 1963245 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54250->127.0.0.1:34626: read: connection reset by peer
	I1217 11:54:47.058324 1963245 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-737478
	
	I1217 11:54:47.058354 1963245 ubuntu.go:182] provisioning hostname "no-preload-737478"
	I1217 11:54:47.058423 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:47.079108 1963245 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:47.079321 1963245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34626 <nil> <nil>}
	I1217 11:54:47.079334 1963245 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-737478 && echo "no-preload-737478" | sudo tee /etc/hostname
	I1217 11:54:47.228839 1963245 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-737478
	
	I1217 11:54:47.228950 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:47.250286 1963245 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:47.250600 1963245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34626 <nil> <nil>}
	I1217 11:54:47.250625 1963245 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-737478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-737478/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-737478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:54:47.383310 1963245 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:54:47.383342 1963245 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:54:47.383392 1963245 ubuntu.go:190] setting up certificates
	I1217 11:54:47.383403 1963245 provision.go:84] configureAuth start
	I1217 11:54:47.383453 1963245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-737478
	I1217 11:54:47.404176 1963245 provision.go:143] copyHostCerts
	I1217 11:54:47.404241 1963245 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:54:47.404259 1963245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:54:47.404329 1963245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:54:47.404447 1963245 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:54:47.404458 1963245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:54:47.404487 1963245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:54:47.404574 1963245 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:54:47.404589 1963245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:54:47.404629 1963245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:54:47.404713 1963245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.no-preload-737478 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-737478]
	I1217 11:54:47.580610 1963245 provision.go:177] copyRemoteCerts
	I1217 11:54:47.580664 1963245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:54:47.580697 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:47.602147 1963245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34626 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:54:47.702816 1963245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:54:47.725275 1963245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:54:47.747451 1963245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:54:47.770259 1963245 provision.go:87] duration metric: took 386.84353ms to configureAuth
	I1217 11:54:47.770290 1963245 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:54:47.770512 1963245 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:47.770694 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:47.792005 1963245 main.go:143] libmachine: Using SSH client type: native
	I1217 11:54:47.792266 1963245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34626 <nil> <nil>}
	I1217 11:54:47.792285 1963245 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:54:48.148115 1963245 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:54:48.148144 1963245 machine.go:97] duration metric: took 4.246854485s to provisionDockerMachine
	I1217 11:54:48.148160 1963245 start.go:293] postStartSetup for "no-preload-737478" (driver="docker")
	I1217 11:54:48.148175 1963245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:54:48.148235 1963245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:54:48.148283 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:48.169926 1963245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34626 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:54:48.269865 1963245 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:54:48.273551 1963245 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:54:48.273586 1963245 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:54:48.273604 1963245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:54:48.273666 1963245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:54:48.273763 1963245 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:54:48.273874 1963245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:54:48.281938 1963245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:54:48.301489 1963245 start.go:296] duration metric: took 153.309297ms for postStartSetup
	I1217 11:54:48.301604 1963245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:54:48.301650 1963245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:54:48.321628 1963245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34626 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.021396906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.026772111Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bdd6835e-156f-4ae9-8b29-aa91ce74d991 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.02869128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=118f8ff2-129d-4b86-aeb0-c4255375e07a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.029965602Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.031910252Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.03289422Z" level=info msg="Ran pod sandbox fc82392128bcecba2201e5c268cdfef41da1505af1a4393286d259dfdde611fe with infra container: kube-system/kube-proxy-grz2c/POD" id=118f8ff2-129d-4b86-aeb0-c4255375e07a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.033459481Z" level=info msg="Ran pod sandbox 1af2e4acee0190491c2bd7e8767ed683ae8d5018c337248b6e7d6c2bdc3d6e7f with infra container: kube-system/kindnet-t6q5x/POD" id=bdd6835e-156f-4ae9-8b29-aa91ce74d991 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.035396513Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=98e5f9b4-396b-48de-9e25-bc329e4e8174 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.036061662Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=aaac7f44-e2c5-4007-9997-06c14434201e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.037054718Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6b00cbc5-ce8c-425e-bb7d-05ec77caf608 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.037796165Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b4345a41-6ce7-42d1-a9dd-1ff164e0b354 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.038977552Z" level=info msg="Creating container: kube-system/kube-proxy-grz2c/kube-proxy" id=4baa61dd-27a2-4edc-bcfa-2989baad0b9b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.039120529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.039526089Z" level=info msg="Creating container: kube-system/kindnet-t6q5x/kindnet-cni" id=06beef90-4807-4001-9fca-e111a9abcf4f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.039664222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.046167842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.047021701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.047683688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.047028388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.103371539Z" level=info msg="Created container c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50: kube-system/kindnet-t6q5x/kindnet-cni" id=06beef90-4807-4001-9fca-e111a9abcf4f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.104246859Z" level=info msg="Starting container: c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50" id=556bf8d3-aecf-4bad-ab36-7ba1b0789cc3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.107495818Z" level=info msg="Started container" PID=1101 containerID=c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50 description=kube-system/kindnet-t6q5x/kindnet-cni id=556bf8d3-aecf-4bad-ab36-7ba1b0789cc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1af2e4acee0190491c2bd7e8767ed683ae8d5018c337248b6e7d6c2bdc3d6e7f
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.111144184Z" level=info msg="Created container 669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13: kube-system/kube-proxy-grz2c/kube-proxy" id=4baa61dd-27a2-4edc-bcfa-2989baad0b9b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.112064589Z" level=info msg="Starting container: 669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13" id=0aafee8c-0e15-4adb-98b5-2fae5301fd84 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:43 newest-cni-601829 crio[564]: time="2025-12-17T11:54:43.118903066Z" level=info msg="Started container" PID=1102 containerID=669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13 description=kube-system/kube-proxy-grz2c/kube-proxy id=0aafee8c-0e15-4adb-98b5-2fae5301fd84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc82392128bcecba2201e5c268cdfef41da1505af1a4393286d259dfdde611fe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c271c59be6e17       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   7 seconds ago       Running             kindnet-cni               1                   1af2e4acee019       kindnet-t6q5x                               kube-system
	669d3a2c60805       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   7 seconds ago       Running             kube-proxy                1                   fc82392128bce       kube-proxy-grz2c                            kube-system
	083255864c4b6       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   8 seconds ago       Running             kube-apiserver            1                   1371def546ba5       kube-apiserver-newest-cni-601829            kube-system
	deb1141598038       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   8 seconds ago       Running             kube-controller-manager   1                   562d3a1e8e939       kube-controller-manager-newest-cni-601829   kube-system
	e8bbfa1edf538       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   8 seconds ago       Running             kube-scheduler            1                   5e050b7475883       kube-scheduler-newest-cni-601829            kube-system
	9b48e50a2ca11       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   9a5e7ab55d679       etcd-newest-cni-601829                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-601829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-601829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=newest-cni-601829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-601829
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:54:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 11:54:42 +0000   Wed, 17 Dec 2025 11:54:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-601829
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                bf12f87d-6e9a-4666-9ac7-1005cb2f7e7a
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-601829                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-t6q5x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-601829             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-601829    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-grz2c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-601829             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  24s   node-controller  Node newest-cni-601829 event: Registered Node newest-cni-601829 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-601829 event: Registered Node newest-cni-601829 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [9b48e50a2ca11fa0201aa0e2c85a506d04ec5e8272efea4ebf4b314a2777a9a4] <==
	{"level":"info","ts":"2025-12-17T11:54:41.327990Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:41.328034Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:41.328088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-17T11:54:41.328245Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T11:54:41.328390Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:54:41.328451Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:41.328506Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:54:41.617773Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:41.617837Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:41.617945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:41.617970Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:41.617991Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.618794Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.618825Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:41.618847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.618856Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:41.620250Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-601829 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:54:41.620254Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:41.620281Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:41.620491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:41.620717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:41.621959Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:41.622399Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:41.627274Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T11:54:41.627347Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 11:54:50 up  5:37,  0 user,  load average: 4.71, 3.42, 2.26
	Linux newest-cni-601829 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c271c59be6e17471448233597e4874146b1546beb60afaceb1bde083ed358d50] <==
	I1217 11:54:43.348023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:43.348342       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 11:54:43.348502       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:43.348527       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:43.348587       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:43.643399       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:43.643440       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:43.643457       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:43.643635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:44.044026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:44.044058       1 metrics.go:72] Registering metrics
	I1217 11:54:44.044177       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [083255864c4b6da94f3364ec07f1239b62d744e32359ba7f306864571bec5781] <==
	I1217 11:54:42.643820       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 11:54:42.644111       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:42.644130       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 11:54:42.644142       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:42.644184       1 aggregator.go:187] initial CRD sync complete...
	I1217 11:54:42.644199       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:54:42.644205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:54:42.644212       1 cache.go:39] Caches are synced for autoregister controller
	E1217 11:54:42.650328       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 11:54:42.651217       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 11:54:42.672052       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:42.678414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:42.910925       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:42.910925       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:42.961368       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:43.009765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:43.045035       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:43.075904       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:43.160096       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.159.103"}
	I1217 11:54:43.172300       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.57.221"}
	I1217 11:54:43.546771       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 11:54:46.283733       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:46.384290       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:46.434087       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:54:46.484639       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [deb114159803887ef4c8f5a7bf03aff5659b8ba6236301758a913b5fb2be9360] <==
	I1217 11:54:45.796323       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.796439       1 range_allocator.go:177] "Sending events to api server"
	I1217 11:54:45.796510       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 11:54:45.796570       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:45.796602       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797162       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797196       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797343       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.797382       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.799276       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.799383       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 11:54:45.799492       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-601829"
	I1217 11:54:45.799585       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 11:54:45.800871       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.802216       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.800976       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.801118       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.800964       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.801155       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.801132       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.806340       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.891658       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:45.891684       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 11:54:45.891691       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 11:54:45.895616       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [669d3a2c60805a49fa52f79a471480e1956d00294260490b38cb1e1c874a6e13] <==
	I1217 11:54:43.171412       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:43.237639       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:43.337822       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:43.337879       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 11:54:43.338000       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:43.358634       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:43.358705       1 server_linux.go:136] "Using iptables Proxier"
	I1217 11:54:43.365039       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:43.365556       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 11:54:43.365582       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:43.366798       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:43.366822       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:43.366859       1 config.go:200] "Starting service config controller"
	I1217 11:54:43.366866       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:43.366884       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:43.366894       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:43.366978       1 config.go:309] "Starting node config controller"
	I1217 11:54:43.367002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:43.367012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:54:43.467498       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:54:43.467562       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:54:43.467562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e8bbfa1edf5382c6ee1addad89bb4987d033da47ffe9ba3451542d30f8a2ba20] <==
	I1217 11:54:41.720957       1 serving.go:386] Generated self-signed cert in-memory
	W1217 11:54:42.577008       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 11:54:42.577070       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 11:54:42.577083       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 11:54:42.577091       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 11:54:42.615455       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 11:54:42.615489       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:42.619578       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:54:42.619715       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:54:42.619742       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:42.619764       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 11:54:42.722246       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.743809     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.750472     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-601829\" already exists" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.751628     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.751748     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.752032     719 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.752294     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-601829" containerName="kube-controller-manager"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764194     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-601829\" already exists" pod="kube-system/etcd-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764304     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-601829" containerName="etcd"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764308     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-601829\" already exists" pod="kube-system/kube-apiserver-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764194     719 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-601829\" already exists" pod="kube-system/kube-scheduler-newest-cni-601829"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764422     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-601829" containerName="kube-scheduler"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: E1217 11:54:42.764552     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.812296     719 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.903459     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-lib-modules\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.903517     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f43b51-b45f-4c1c-a95f-3a34192b4334-lib-modules\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.903556     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-xtables-lock\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.904763     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c3deb88-31c5-4008-aae7-7467aa3f9e81-cni-cfg\") pod \"kindnet-t6q5x\" (UID: \"6c3deb88-31c5-4008-aae7-7467aa3f9e81\") " pod="kube-system/kindnet-t6q5x"
	Dec 17 11:54:42 newest-cni-601829 kubelet[719]: I1217 11:54:42.904855     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f43b51-b45f-4c1c-a95f-3a34192b4334-xtables-lock\") pod \"kube-proxy-grz2c\" (UID: \"35f43b51-b45f-4c1c-a95f-3a34192b4334\") " pod="kube-system/kube-proxy-grz2c"
	Dec 17 11:54:43 newest-cni-601829 kubelet[719]: E1217 11:54:43.757264     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-601829" containerName="kube-scheduler"
	Dec 17 11:54:43 newest-cni-601829 kubelet[719]: E1217 11:54:43.758026     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:43 newest-cni-601829 kubelet[719]: E1217 11:54:43.758439     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-601829" containerName="etcd"
	Dec 17 11:54:44 newest-cni-601829 kubelet[719]: E1217 11:54:44.759080     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-601829" containerName="kube-apiserver"
	Dec 17 11:54:45 newest-cni-601829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:54:45 newest-cni-601829 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:54:45 newest-cni-601829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-601829 -n newest-cni-601829
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-601829 -n newest-cni-601829: exit status 2 (353.363796ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-601829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68: exit status 1 (81.413799ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jwmxw" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-66bmg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6cw68" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-601829 describe pod coredns-7d764666f9-jwmxw storage-provisioner dashboard-metrics-scraper-867fb5f87b-66bmg kubernetes-dashboard-b84665fb8-6cw68: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.194595ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:54:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-382022 describe deploy/metrics-server -n kube-system: exit status 1 (65.310343ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-382022 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-382022
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-382022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7",
	        "Created": "2025-12-17T11:54:00.607547087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1951307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:00.823562483Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/hosts",
	        "LogPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7-json.log",
	        "Name": "/default-k8s-diff-port-382022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-382022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-382022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7",
	                "LowerDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-382022",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-382022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-382022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-382022",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-382022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0b5d134d9b12e66f2c905b9a7f2304c334fb0233c3588c29cc6223a0a9619740",
	            "SandboxKey": "/var/run/docker/netns/0b5d134d9b12",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34612"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34615"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34613"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34614"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-382022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "009b4cca67d182f2097fba9336c46a1ff7237dab7ad046bb8a1746aae27ee661",
	                    "EndpointID": "1cfed478c989de038b6ed25121aec799a1df0a1335112701eb7c36d053a9bd8d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ee:e3:9c:e4:c9:6d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-382022",
	                        "4b7e99a28ab9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382022 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-382022 logs -n 25: (1.110737557s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ image   │ newest-cni-601829 image list --format=json                                                                                                                                                                                                         │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ pause   │ -p newest-cni-601829 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:54:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:54:43.372608 1963245 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:54:43.372929 1963245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:43.372940 1963245 out.go:374] Setting ErrFile to fd 2...
	I1217 11:54:43.372945 1963245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:54:43.373189 1963245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:54:43.373694 1963245 out.go:368] Setting JSON to false
	I1217 11:54:43.374972 1963245 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20228,"bootTime":1765952255,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:54:43.375046 1963245 start.go:143] virtualization: kvm guest
	I1217 11:54:43.376929 1963245 out.go:179] * [no-preload-737478] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:54:43.378750 1963245 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:54:43.378809 1963245 notify.go:221] Checking for updates...
	I1217 11:54:43.381007 1963245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:54:43.382275 1963245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:54:43.383341 1963245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:54:43.384421 1963245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:54:43.385472 1963245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:54:43.386902 1963245 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:54:43.387464 1963245 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:54:43.411926 1963245 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:54:43.412056 1963245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:43.468520 1963245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-17 11:54:43.458780495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:43.468675 1963245 docker.go:319] overlay module found
	I1217 11:54:43.470781 1963245 out.go:179] * Using the docker driver based on existing profile
	I1217 11:54:43.472215 1963245 start.go:309] selected driver: docker
	I1217 11:54:43.472233 1963245 start.go:927] validating driver "docker" against &{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:43.472336 1963245 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:54:43.472956 1963245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:54:43.531701 1963245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-17 11:54:43.520989371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:54:43.532107 1963245 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:54:43.532155 1963245 cni.go:84] Creating CNI manager for ""
	I1217 11:54:43.532242 1963245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:54:43.532294 1963245 start.go:353] cluster config:
	{Name:no-preload-737478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-737478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:54:43.534200 1963245 out.go:179] * Starting "no-preload-737478" primary control-plane node in "no-preload-737478" cluster
	I1217 11:54:43.535375 1963245 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:54:43.536573 1963245 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:54:43.537896 1963245 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:54:43.537991 1963245 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:54:43.538063 1963245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/no-preload-737478/config.json ...
	I1217 11:54:43.538233 1963245 cache.go:107] acquiring lock: {Name:mkce365350b466caa625a853fa04d355dafaf737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538872 1963245 cache.go:107] acquiring lock: {Name:mk9b11255ca4aa317635277ae364f17e3f34e430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538880 1963245 cache.go:107] acquiring lock: {Name:mka9f0fd2d6e879a6d51520f3e35096f83561a39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.538926 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:54:43.538942 1963245 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 726.203µs
	I1217 11:54:43.538965 1963245 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:54:43.538585 1963245 cache.go:107] acquiring lock: {Name:mk195f08cb3604d752263934a40f27bac4021dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539005 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:54:43.538743 1963245 cache.go:107] acquiring lock: {Name:mkb34fd803350485ad0146dad2d5e5975c7a1fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539019 1963245 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 146.962µs
	I1217 11:54:43.538704 1963245 cache.go:107] acquiring lock: {Name:mka6d3f4b4fc66993c428fbcff6e92cde119967c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539038 1963245 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539020 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:54:43.539068 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:54:43.538251 1963245 cache.go:107] acquiring lock: {Name:mk6a07e7ceeb8fe04825f0802eeaaeeee4c06443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539076 1963245 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 336.63µs
	I1217 11:54:43.539084 1963245 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539069 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:54:43.539097 1963245 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 529.938µs
	I1217 11:54:43.539105 1963245 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:54:43.539087 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:54:43.539123 1963245 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 423.999µs
	I1217 11:54:43.539130 1963245 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:54:43.539054 1963245 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 813µs
	I1217 11:54:43.539186 1963245 cache.go:107] acquiring lock: {Name:mk69f66d091b3517cc19ba9a659d980495d072d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.539224 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:54:43.539238 1963245 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539239 1963245 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.001705ms
	I1217 11:54:43.539253 1963245 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:54:43.539271 1963245 cache.go:115] /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:54:43.539280 1963245 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 507.252µs
	I1217 11:54:43.539289 1963245 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:54:43.539299 1963245 cache.go:87] Successfully saved all images to host disk.
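	[editor's note] The cache.go lines above show minikube taking a per-image lock and then checking whether the cached image tarball already exists on disk, skipping the download when it does. A minimal sketch of that exists-check in Go; the cachedTarPath helper and the MINIKUBE_HOME environment variable are assumptions for illustration, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"time"
	)

	// cachedTarPath mirrors the layout visible in the log:
	// <cacheDir>/cache/images/amd64/<registry>/<name>_<tag>
	func cachedTarPath(cacheDir, image string) string {
		// e.g. "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
		return filepath.Join(cacheDir, "cache", "images", "amd64", strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		cacheDir := os.Getenv("MINIKUBE_HOME") // assumption: points at the .minikube directory
		images := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/pause:3.10.1",
		}
		for _, img := range images {
			start := time.Now()
			if _, err := os.Stat(cachedTarPath(cacheDir, img)); err == nil {
				fmt.Printf("cache image %q already on disk (checked in %s)\n", img, time.Since(start))
				continue
			}
			fmt.Printf("cache image %q missing, would download it\n", img)
		}
	}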
	I1217 11:54:43.563571 1963245 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:54:43.563593 1963245 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:54:43.563614 1963245 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:54:43.563675 1963245 start.go:360] acquireMachinesLock for no-preload-737478: {Name:mk1ef5e7ed91896001178c3ee81911e4005528d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:54:43.563747 1963245 start.go:364] duration metric: took 49.755µs to acquireMachinesLock for "no-preload-737478"
	I1217 11:54:43.563771 1963245 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:54:43.563781 1963245 fix.go:54] fixHost starting: 
	I1217 11:54:43.564060 1963245 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:54:43.586018 1963245 fix.go:112] recreateIfNeeded on no-preload-737478: state=Stopped err=<nil>
	W1217 11:54:43.586071 1963245 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:54:43.272638 1960071 addons.go:530] duration metric: took 1.853650188s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:54:43.754696 1960071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:43.760260 1960071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:54:43.760297 1960071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 11:54:44.253919 1960071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:54:44.258064 1960071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:54:44.259107 1960071 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 11:54:44.259132 1960071 api_server.go:131] duration metric: took 1.005429065s to wait for apiserver health ...
	I1217 11:54:44.259143 1960071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:54:44.262904 1960071 system_pods.go:59] 8 kube-system pods found
	I1217 11:54:44.262947 1960071 system_pods.go:61] "coredns-7d764666f9-jwmxw" [1daf4bf2-080a-49a2-ad9f-fea9cdbc571b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:44.262958 1960071 system_pods.go:61] "etcd-newest-cni-601829" [d71be3a5-4bd0-47e7-98ea-b50d6c2abd0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:54:44.262966 1960071 system_pods.go:61] "kindnet-t6q5x" [6c3deb88-31c5-4008-aae7-7467aa3f9e81] Running
	I1217 11:54:44.262975 1960071 system_pods.go:61] "kube-apiserver-newest-cni-601829" [eb175f99-213c-4663-bbf7-43c54202dbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:54:44.262983 1960071 system_pods.go:61] "kube-controller-manager-newest-cni-601829" [f9d7a310-c545-49de-9def-714ba54d3bbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:54:44.262990 1960071 system_pods.go:61] "kube-proxy-grz2c" [35f43b51-b45f-4c1c-a95f-3a34192b4334] Running
	I1217 11:54:44.262999 1960071 system_pods.go:61] "kube-scheduler-newest-cni-601829" [79ecb056-ebc4-4c51-85a4-727a2d633751] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:54:44.263008 1960071 system_pods.go:61] "storage-provisioner" [3e2c9b6f-d0cc-48bc-ba8d-6da58cb1968d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 11:54:44.263024 1960071 system_pods.go:74] duration metric: took 3.874548ms to wait for pod list to return data ...
	I1217 11:54:44.263034 1960071 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:54:44.265253 1960071 default_sa.go:45] found service account: "default"
	I1217 11:54:44.265273 1960071 default_sa.go:55] duration metric: took 2.232696ms for default service account to be created ...
	I1217 11:54:44.265286 1960071 kubeadm.go:587] duration metric: took 2.84630377s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:54:44.265307 1960071 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:54:44.267523 1960071 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:54:44.267569 1960071 node_conditions.go:123] node cpu capacity is 8
	I1217 11:54:44.267608 1960071 node_conditions.go:105] duration metric: took 2.288058ms to run NodePressure ...
	I1217 11:54:44.267622 1960071 start.go:242] waiting for startup goroutines ...
	I1217 11:54:44.267631 1960071 start.go:247] waiting for cluster config update ...
	I1217 11:54:44.267642 1960071 start.go:256] writing updated cluster config ...
	I1217 11:54:44.267881 1960071 ssh_runner.go:195] Run: rm -f paused
	I1217 11:54:44.316576 1960071 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:54:44.318790 1960071 out.go:179] * Done! kubectl is now configured to use "newest-cni-601829" cluster and "default" namespace by default
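	[editor's note] The api_server.go lines above show the start-up loop polling https://192.168.85.2:8443/healthz, tolerating the transient 500 while the poststarthook/rbac/bootstrap-roles hook is still running, and moving on once the endpoint returns 200 "ok". A minimal sketch of such a polling loop, assuming a self-signed apiserver certificate (so TLS verification is skipped for the probe) and a hypothetical waitForHealthz helper; minikube's own implementation is more involved:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver uses a cluster-local CA, so skip verification for this probe only.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				// e.g. 500 while poststarthook/rbac/bootstrap-roles has not finished
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		healthzURL := "https://192.168.85.2:8443/healthz" // address taken from the log above
		if err := waitForHealthz(healthzURL, time.Minute); err != nil {
			fmt.Println(err)
		}
	}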
	
	
	==> CRI-O <==
	Dec 17 11:54:35 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:35.042964366Z" level=info msg="Starting container: c4874621f7737ab1f0889f2d69de8dd74b9e6b47fdc275fc11b675ba4d766ff8" id=ee69f830-1ee4-438c-9643-15d9c52c9a21 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:35 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:35.045188594Z" level=info msg="Started container" PID=1941 containerID=c4874621f7737ab1f0889f2d69de8dd74b9e6b47fdc275fc11b675ba4d766ff8 description=kube-system/coredns-66bc5c9577-8nz5c/coredns id=ee69f830-1ee4-438c-9643-15d9c52c9a21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d897314194bc23adb9fce0b26b390b44bccae00d34f96a9cd8cb7f26acb2e45d
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.521682582Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ca358659-5d11-41d0-baa0-3485c43eb248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.521771109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.527428157Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4373acba27d0b49d85933971b3def492838e2f0eada4c39da7fb432518b4359c UID:da1d1f67-6ece-4cec-89b1-7a562b3d92a5 NetNS:/var/run/netns/44f8872f-c407-41bb-a634-e2fe49fae61f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000613210}] Aliases:map[]}"
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.527476181Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.538894153Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4373acba27d0b49d85933971b3def492838e2f0eada4c39da7fb432518b4359c UID:da1d1f67-6ece-4cec-89b1-7a562b3d92a5 NetNS:/var/run/netns/44f8872f-c407-41bb-a634-e2fe49fae61f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000613210}] Aliases:map[]}"
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.539099481Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.540170119Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.541443061Z" level=info msg="Ran pod sandbox 4373acba27d0b49d85933971b3def492838e2f0eada4c39da7fb432518b4359c with infra container: default/busybox/POD" id=ca358659-5d11-41d0-baa0-3485c43eb248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.542909039Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=744af32a-3ef2-422b-8623-a94ba3edb573 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.543056099Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=744af32a-3ef2-422b-8623-a94ba3edb573 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.543119428Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=744af32a-3ef2-422b-8623-a94ba3edb573 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.543772058Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f5ac2069-f125-4dba-9290-75632d59c4ff name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:38 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:38.545335247Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.574082862Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f5ac2069-f125-4dba-9290-75632d59c4ff name=/runtime.v1.ImageService/PullImage
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.574749487Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9f266dbc-e831-4603-a61f-0e9ffa481e8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.57606588Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aecb596b-4967-4c6c-92cb-374ea9bdfb9f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.579286044Z" level=info msg="Creating container: default/busybox/busybox" id=2eca5613-b34d-4295-ac4e-be79c9fb3ff3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.579429856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.583137328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.583734406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.617830561Z" level=info msg="Created container 758bab409baa6e7625069bbe36d9e863e2bada7e5ccd165b323e9405fead9d53: default/busybox/busybox" id=2eca5613-b34d-4295-ac4e-be79c9fb3ff3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.618505046Z" level=info msg="Starting container: 758bab409baa6e7625069bbe36d9e863e2bada7e5ccd165b323e9405fead9d53" id=5096ad7c-ebc2-492b-8e96-ef1415355c8e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:54:40 default-k8s-diff-port-382022 crio[824]: time="2025-12-17T11:54:40.620259668Z" level=info msg="Started container" PID=2017 containerID=758bab409baa6e7625069bbe36d9e863e2bada7e5ccd165b323e9405fead9d53 description=default/busybox/busybox id=5096ad7c-ebc2-492b-8e96-ef1415355c8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4373acba27d0b49d85933971b3def492838e2f0eada4c39da7fb432518b4359c
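	[editor's note] The CRI-O log above walks the pull-if-missing flow for the busybox test image: ImageStatus reports the image absent, PullImage fetches it by digest, then CreateContainer and StartContainer run it. The same check-then-pull step can be reproduced with crictl, the CLI the test harness itself drives over SSH. A hedged Go sketch shelling out to crictl, assuming crictl is on PATH, is configured for the CRI-O socket, and needs sudo as in the harness; this is an illustration, not CRI-O's internal logic:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// pullIfMissing mirrors the "Checking image status" / "Pulling image" sequence:
	// ask crictl whether the image is present, and pull it only when it is not.
	func pullIfMissing(image string) error {
		out, err := exec.Command("sudo", "crictl", "images", "-q", image).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			fmt.Printf("image %s already present\n", image)
			return nil
		}
		fmt.Printf("image %s not found, pulling\n", image)
		return exec.Command("sudo", "crictl", "pull", image).Run()
	}

	func main() {
		if err := pullIfMissing("gcr.io/k8s-minikube/busybox:1.28.4-glibc"); err != nil {
			fmt.Println("pull failed:", err)
		}
	}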
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	758bab409baa6       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   4373acba27d0b       busybox                                                default
	c4874621f7737       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   d897314194bc2       coredns-66bc5c9577-8nz5c                               kube-system
	9bd265586942d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   21a87f5461dd3       storage-provisioner                                    kube-system
	7ef65b4b7cab5       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   a81a25d8fa466       kindnet-lsrk2                                          kube-system
	94824d07637b4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      26 seconds ago      Running             kube-proxy                0                   d39b8a9340533       kube-proxy-ss2p8                                       kube-system
	abd99ec302613       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      36 seconds ago      Running             kube-apiserver            0                   a26c292837bc6       kube-apiserver-default-k8s-diff-port-382022            kube-system
	77ca8577e7dce       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      36 seconds ago      Running             kube-controller-manager   0                   a99dbce233135       kube-controller-manager-default-k8s-diff-port-382022   kube-system
	22bc036e64a0b       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      36 seconds ago      Running             kube-scheduler            0                   271b580fb2cc7       kube-scheduler-default-k8s-diff-port-382022            kube-system
	c910fa497cb8e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   9f4221594fbe6       etcd-default-k8s-diff-port-382022                      kube-system
	
	
	==> coredns [c4874621f7737ab1f0889f2d69de8dd74b9e6b47fdc275fc11b675ba4d766ff8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59062 - 10833 "HINFO IN 5023509618935129781.8569631684681014332. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033932256s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-382022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-382022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=default-k8s-diff-port-382022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-382022
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:54:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:54:46 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:54:46 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:54:46 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:54:46 +0000   Wed, 17 Dec 2025 11:54:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-382022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                1aeb2617-3121-4d2f-838a-f21c8acff3cb
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-8nz5c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-382022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-lsrk2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-382022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-382022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-ss2p8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-382022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-382022 event: Registered Node default-k8s-diff-port-382022 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-382022 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [c910fa497cb8e34b838db10d285a56705f605d2fb21b884b9c37440f26b671e6] <==
	{"level":"warn","ts":"2025-12-17T11:54:12.435503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.444337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.451112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.458264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.464548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.471255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.479019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.485394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.493400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.514699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.522002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.528872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.536140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.543863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.550797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.557701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.565099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.572099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.580308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.588327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.596006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.611372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.618467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.625090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:54:12.679666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58120","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:54:47 up  5:37,  0 user,  load average: 4.86, 3.43, 2.25
	Linux default-k8s-diff-port-382022 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ef65b4b7cab555285b613a8fbb0141ecdd0c124b70d7d0c2115372b70d2ab8d] <==
	I1217 11:54:24.342233       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:24.342594       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 11:54:24.342774       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:24.342800       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:24.342824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:24.582882       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:24.582929       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:24.582941       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:24.583111       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:24.983382       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:24.983414       1 metrics.go:72] Registering metrics
	I1217 11:54:24.983476       1 controller.go:711] "Syncing nftables rules"
	I1217 11:54:34.585671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:54:34.585749       1 main.go:301] handling current node
	I1217 11:54:44.585634       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:54:44.585688       1 main.go:301] handling current node
	
	
	==> kube-apiserver [abd99ec30261357d4ad01cbd8a41f34dea12bddf8827b147777238a592832236] <==
	E1217 11:54:13.247643       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1217 11:54:13.279332       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:13.284420       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:13.284686       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 11:54:13.292360       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:13.292450       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:54:13.450690       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:14.080043       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 11:54:14.085774       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 11:54:14.085800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:54:14.634044       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:14.678643       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:14.787624       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 11:54:14.793821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1217 11:54:14.795042       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:14.799355       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:15.106051       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:15.994649       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:16.004168       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 11:54:16.012445       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 11:54:20.408208       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:54:20.760493       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:20.765641       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:20.909087       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 11:54:46.320089       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:56928: use of closed network connection
	
	
	==> kube-controller-manager [77ca8577e7dcef50a447f81c667005cb1d8d5791c59d669977225bb8c78e5bb5] <==
	I1217 11:54:20.104052       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 11:54:20.104191       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 11:54:20.105092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 11:54:20.105123       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 11:54:20.105301       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 11:54:20.105688       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 11:54:20.105824       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 11:54:20.105847       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 11:54:20.105891       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 11:54:20.105910       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 11:54:20.105987       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 11:54:20.105909       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 11:54:20.108408       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 11:54:20.109573       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 11:54:20.109678       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 11:54:20.110946       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 11:54:20.112139       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:54:20.120601       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:54:20.130931       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:54:20.137853       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 11:54:20.138199       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:54:20.139261       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:54:20.139486       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:54:20.173159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:54:35.095074       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [94824d07637b4132a9e901f5171e282beb89120481a9bd0af8d7be2ba4261a5c] <==
	I1217 11:54:21.345985       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:21.423104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:54:21.523373       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:54:21.523440       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 11:54:21.523550       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:21.555897       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:21.556041       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:54:21.577015       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:21.580106       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:54:21.580251       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:21.585051       1 config.go:200] "Starting service config controller"
	I1217 11:54:21.585777       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:21.586004       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:21.586157       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:21.586376       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:21.586662       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:21.586883       1 config.go:309] "Starting node config controller"
	I1217 11:54:21.586899       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:21.586906       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:54:21.687247       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:54:21.688104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:54:21.688140       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [22bc036e64a0b1c24cf18738aca4483bd802efc0d09ffe1ad2bb0a7e52717e4b] <==
	E1217 11:54:13.130262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:54:13.130259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:54:13.130381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:54:13.130387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:54:13.130396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:54:13.130486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:54:13.130498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:54:13.130559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:54:13.130606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 11:54:13.130704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:54:13.934901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:54:14.006279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:54:14.093179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:54:14.113842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 11:54:14.177354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:54:14.189569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:54:14.208191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:54:14.209336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 11:54:14.214273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:54:14.268819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:54:14.284972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:54:14.289425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:54:14.316782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:54:14.348331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1217 11:54:16.325501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:54:16 default-k8s-diff-port-382022 kubelet[1353]: E1217 11:54:16.880312    1353 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-382022\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-382022"
	Dec 17 11:54:16 default-k8s-diff-port-382022 kubelet[1353]: E1217 11:54:16.880691    1353 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-382022\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-382022"
	Dec 17 11:54:16 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:16.901981    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-382022" podStartSLOduration=1.9019580299999999 podStartE2EDuration="1.90195803s" podCreationTimestamp="2025-12-17 11:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:16.889847473 +0000 UTC m=+1.141687106" watchObservedRunningTime="2025-12-17 11:54:16.90195803 +0000 UTC m=+1.153797658"
	Dec 17 11:54:16 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:16.902140    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-382022" podStartSLOduration=1.902128464 podStartE2EDuration="1.902128464s" podCreationTimestamp="2025-12-17 11:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:16.902050518 +0000 UTC m=+1.153890153" watchObservedRunningTime="2025-12-17 11:54:16.902128464 +0000 UTC m=+1.153968096"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.189626    1353 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.190307    1353 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.957786    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdj7j\" (UniqueName: \"kubernetes.io/projected/59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c-kube-api-access-mdj7j\") pod \"kindnet-lsrk2\" (UID: \"59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c\") " pod="kube-system/kindnet-lsrk2"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.957848    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c-lib-modules\") pod \"kindnet-lsrk2\" (UID: \"59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c\") " pod="kube-system/kindnet-lsrk2"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.957880    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhws\" (UniqueName: \"kubernetes.io/projected/d7f7db01-8945-4a8f-aa14-c6f50ac56824-kube-api-access-7jhws\") pod \"kube-proxy-ss2p8\" (UID: \"d7f7db01-8945-4a8f-aa14-c6f50ac56824\") " pod="kube-system/kube-proxy-ss2p8"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.958094    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7f7db01-8945-4a8f-aa14-c6f50ac56824-xtables-lock\") pod \"kube-proxy-ss2p8\" (UID: \"d7f7db01-8945-4a8f-aa14-c6f50ac56824\") " pod="kube-system/kube-proxy-ss2p8"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.958148    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7f7db01-8945-4a8f-aa14-c6f50ac56824-lib-modules\") pod \"kube-proxy-ss2p8\" (UID: \"d7f7db01-8945-4a8f-aa14-c6f50ac56824\") " pod="kube-system/kube-proxy-ss2p8"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.958182    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c-xtables-lock\") pod \"kindnet-lsrk2\" (UID: \"59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c\") " pod="kube-system/kindnet-lsrk2"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.958203    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7f7db01-8945-4a8f-aa14-c6f50ac56824-kube-proxy\") pod \"kube-proxy-ss2p8\" (UID: \"d7f7db01-8945-4a8f-aa14-c6f50ac56824\") " pod="kube-system/kube-proxy-ss2p8"
	Dec 17 11:54:20 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:20.958227    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c-cni-cfg\") pod \"kindnet-lsrk2\" (UID: \"59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c\") " pod="kube-system/kindnet-lsrk2"
	Dec 17 11:54:24 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:24.328156    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ss2p8" podStartSLOduration=4.328132959 podStartE2EDuration="4.328132959s" podCreationTimestamp="2025-12-17 11:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:21.893600305 +0000 UTC m=+6.145439938" watchObservedRunningTime="2025-12-17 11:54:24.328132959 +0000 UTC m=+8.579972592"
	Dec 17 11:54:24 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:24.907627    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lsrk2" podStartSLOduration=2.071014945 podStartE2EDuration="4.907602022s" podCreationTimestamp="2025-12-17 11:54:20 +0000 UTC" firstStartedPulling="2025-12-17 11:54:21.244721219 +0000 UTC m=+5.496560845" lastFinishedPulling="2025-12-17 11:54:24.081308311 +0000 UTC m=+8.333147922" observedRunningTime="2025-12-17 11:54:24.907392344 +0000 UTC m=+9.159231978" watchObservedRunningTime="2025-12-17 11:54:24.907602022 +0000 UTC m=+9.159441655"
	Dec 17 11:54:34 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:34.652213    1353 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 11:54:34 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:34.760575    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttwt\" (UniqueName: \"kubernetes.io/projected/973e9e2c-a15b-4a45-8d2f-955f94325749-kube-api-access-mttwt\") pod \"storage-provisioner\" (UID: \"973e9e2c-a15b-4a45-8d2f-955f94325749\") " pod="kube-system/storage-provisioner"
	Dec 17 11:54:34 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:34.760634    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d-config-volume\") pod \"coredns-66bc5c9577-8nz5c\" (UID: \"7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d\") " pod="kube-system/coredns-66bc5c9577-8nz5c"
	Dec 17 11:54:34 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:34.760660    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/973e9e2c-a15b-4a45-8d2f-955f94325749-tmp\") pod \"storage-provisioner\" (UID: \"973e9e2c-a15b-4a45-8d2f-955f94325749\") " pod="kube-system/storage-provisioner"
	Dec 17 11:54:34 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:34.760678    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67xn\" (UniqueName: \"kubernetes.io/projected/7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d-kube-api-access-k67xn\") pod \"coredns-66bc5c9577-8nz5c\" (UID: \"7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d\") " pod="kube-system/coredns-66bc5c9577-8nz5c"
	Dec 17 11:54:35 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:35.929763    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8nz5c" podStartSLOduration=14.929740978 podStartE2EDuration="14.929740978s" podCreationTimestamp="2025-12-17 11:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:35.929516751 +0000 UTC m=+20.181356385" watchObservedRunningTime="2025-12-17 11:54:35.929740978 +0000 UTC m=+20.181580610"
	Dec 17 11:54:35 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:35.939375    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.939352323 podStartE2EDuration="13.939352323s" podCreationTimestamp="2025-12-17 11:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 11:54:35.939195966 +0000 UTC m=+20.191035600" watchObservedRunningTime="2025-12-17 11:54:35.939352323 +0000 UTC m=+20.191191959"
	Dec 17 11:54:38 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:38.280212    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfpx\" (UniqueName: \"kubernetes.io/projected/da1d1f67-6ece-4cec-89b1-7a562b3d92a5-kube-api-access-jdfpx\") pod \"busybox\" (UID: \"da1d1f67-6ece-4cec-89b1-7a562b3d92a5\") " pod="default/busybox"
	Dec 17 11:54:40 default-k8s-diff-port-382022 kubelet[1353]: I1217 11:54:40.946438    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.91425647 podStartE2EDuration="2.946421361s" podCreationTimestamp="2025-12-17 11:54:38 +0000 UTC" firstStartedPulling="2025-12-17 11:54:38.543363564 +0000 UTC m=+22.795203192" lastFinishedPulling="2025-12-17 11:54:40.575528456 +0000 UTC m=+24.827368083" observedRunningTime="2025-12-17 11:54:40.946290264 +0000 UTC m=+25.198129898" watchObservedRunningTime="2025-12-17 11:54:40.946421361 +0000 UTC m=+25.198260993"
	
	
	==> storage-provisioner [9bd265586942dce0efd7bfd2ffe94a5c2494e442110ee448b5888e9c08fac39f] <==
	I1217 11:54:35.043391       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:54:35.054095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:54:35.054156       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:54:35.056615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:35.062054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:54:35.062255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:54:35.062451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-382022_8b446fba-f456-44f5-bb11-bd8fdeff487e!
	I1217 11:54:35.062755       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7879bd5-c601-4a2a-a916-1dac80f7bd21", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-382022_8b446fba-f456-44f5-bb11-bd8fdeff487e became leader
	W1217 11:54:35.064940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:35.068792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:54:35.163026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-382022_8b446fba-f456-44f5-bb11-bd8fdeff487e!
	W1217 11:54:37.072710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:37.077342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:39.081241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:39.086841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:41.090260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:41.095163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:43.099821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:43.107848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:45.111841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:45.116242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:47.120805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:54:47.126191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-737478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-737478 --alsologtostderr -v=1: exit status 80 (1.667057218s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-737478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:55:46.731508 1977694 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:46.731626 1977694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:46.731634 1977694 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:46.731637 1977694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:46.731844 1977694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:55:46.732080 1977694 out.go:368] Setting JSON to false
	I1217 11:55:46.732103 1977694 mustload.go:66] Loading cluster: no-preload-737478
	I1217 11:55:46.732493 1977694 config.go:182] Loaded profile config "no-preload-737478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:46.732943 1977694 cli_runner.go:164] Run: docker container inspect no-preload-737478 --format={{.State.Status}}
	I1217 11:55:46.751211 1977694 host.go:66] Checking if "no-preload-737478" exists ...
	I1217 11:55:46.751518 1977694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:46.810863 1977694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 11:55:46.800447595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:46.811747 1977694 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-737478 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 11:55:46.814075 1977694 out.go:179] * Pausing node no-preload-737478 ... 
	I1217 11:55:46.815502 1977694 host.go:66] Checking if "no-preload-737478" exists ...
	I1217 11:55:46.815827 1977694 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:46.815875 1977694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-737478
	I1217 11:55:46.834502 1977694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34626 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/no-preload-737478/id_rsa Username:docker}
	I1217 11:55:46.928807 1977694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:46.958929 1977694 pause.go:52] kubelet running: true
	I1217 11:55:46.958991 1977694 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:55:47.135644 1977694 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:55:47.135761 1977694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:55:47.206606 1977694 cri.go:89] found id: "1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4"
	I1217 11:55:47.206628 1977694 cri.go:89] found id: "8c0fc19eb2c75cce822364a4b57ad3d996f36be504d773da1f4a3833e438910b"
	I1217 11:55:47.206631 1977694 cri.go:89] found id: "e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372"
	I1217 11:55:47.206635 1977694 cri.go:89] found id: "c3857941ca2aae221674eea310f456831e8d058f682132b671e62d0c96c1fc17"
	I1217 11:55:47.206639 1977694 cri.go:89] found id: "1d2b1bb8a1b76843007cd338cd29ad6ab7ffd7691330930addf1432fa7421ec5"
	I1217 11:55:47.206644 1977694 cri.go:89] found id: "dfa862cc6c124cbff58725fd6b60cb1a8b9eefcaf56e3fc283931533b497b6f9"
	I1217 11:55:47.206647 1977694 cri.go:89] found id: "8e9c5260713310721b633c55bf538fd5250281666a4f79e7afb0e39f48e8752a"
	I1217 11:55:47.206650 1977694 cri.go:89] found id: "59ffeef8ed7039998fb2d90ffdb8f586577c7fac1aeca5d33293a0883dcf6fe1"
	I1217 11:55:47.206653 1977694 cri.go:89] found id: "2927eecd91f4b36c104d665f79cbb47dbc7e16d7f360c6a4e4e977b70d7eaf43"
	I1217 11:55:47.206666 1977694 cri.go:89] found id: "0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	I1217 11:55:47.206669 1977694 cri.go:89] found id: "fd3aeebcdab235840cddfdf9ae02671bf3de7091045cb6660338a7cb39e126c4"
	I1217 11:55:47.206673 1977694 cri.go:89] found id: ""
	I1217 11:55:47.206720 1977694 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:55:47.219751 1977694 retry.go:31] will retry after 272.587264ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:47Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:47.493312 1977694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:47.508066 1977694 pause.go:52] kubelet running: false
	I1217 11:55:47.508130 1977694 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:55:47.658796 1977694 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:55:47.658889 1977694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:55:47.730649 1977694 cri.go:89] found id: "1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4"
	I1217 11:55:47.730678 1977694 cri.go:89] found id: "8c0fc19eb2c75cce822364a4b57ad3d996f36be504d773da1f4a3833e438910b"
	I1217 11:55:47.730684 1977694 cri.go:89] found id: "e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372"
	I1217 11:55:47.730690 1977694 cri.go:89] found id: "c3857941ca2aae221674eea310f456831e8d058f682132b671e62d0c96c1fc17"
	I1217 11:55:47.730694 1977694 cri.go:89] found id: "1d2b1bb8a1b76843007cd338cd29ad6ab7ffd7691330930addf1432fa7421ec5"
	I1217 11:55:47.730700 1977694 cri.go:89] found id: "dfa862cc6c124cbff58725fd6b60cb1a8b9eefcaf56e3fc283931533b497b6f9"
	I1217 11:55:47.730705 1977694 cri.go:89] found id: "8e9c5260713310721b633c55bf538fd5250281666a4f79e7afb0e39f48e8752a"
	I1217 11:55:47.730709 1977694 cri.go:89] found id: "59ffeef8ed7039998fb2d90ffdb8f586577c7fac1aeca5d33293a0883dcf6fe1"
	I1217 11:55:47.730714 1977694 cri.go:89] found id: "2927eecd91f4b36c104d665f79cbb47dbc7e16d7f360c6a4e4e977b70d7eaf43"
	I1217 11:55:47.730722 1977694 cri.go:89] found id: "0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	I1217 11:55:47.730728 1977694 cri.go:89] found id: "fd3aeebcdab235840cddfdf9ae02671bf3de7091045cb6660338a7cb39e126c4"
	I1217 11:55:47.730731 1977694 cri.go:89] found id: ""
	I1217 11:55:47.730770 1977694 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:55:47.743355 1977694 retry.go:31] will retry after 330.575924ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:47Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:48.074696 1977694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:48.089017 1977694 pause.go:52] kubelet running: false
	I1217 11:55:48.089085 1977694 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:55:48.237525 1977694 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:55:48.237654 1977694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:55:48.305865 1977694 cri.go:89] found id: "1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4"
	I1217 11:55:48.305900 1977694 cri.go:89] found id: "8c0fc19eb2c75cce822364a4b57ad3d996f36be504d773da1f4a3833e438910b"
	I1217 11:55:48.305906 1977694 cri.go:89] found id: "e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372"
	I1217 11:55:48.305911 1977694 cri.go:89] found id: "c3857941ca2aae221674eea310f456831e8d058f682132b671e62d0c96c1fc17"
	I1217 11:55:48.305915 1977694 cri.go:89] found id: "1d2b1bb8a1b76843007cd338cd29ad6ab7ffd7691330930addf1432fa7421ec5"
	I1217 11:55:48.305923 1977694 cri.go:89] found id: "dfa862cc6c124cbff58725fd6b60cb1a8b9eefcaf56e3fc283931533b497b6f9"
	I1217 11:55:48.305928 1977694 cri.go:89] found id: "8e9c5260713310721b633c55bf538fd5250281666a4f79e7afb0e39f48e8752a"
	I1217 11:55:48.305931 1977694 cri.go:89] found id: "59ffeef8ed7039998fb2d90ffdb8f586577c7fac1aeca5d33293a0883dcf6fe1"
	I1217 11:55:48.305936 1977694 cri.go:89] found id: "2927eecd91f4b36c104d665f79cbb47dbc7e16d7f360c6a4e4e977b70d7eaf43"
	I1217 11:55:48.305951 1977694 cri.go:89] found id: "0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	I1217 11:55:48.305960 1977694 cri.go:89] found id: "fd3aeebcdab235840cddfdf9ae02671bf3de7091045cb6660338a7cb39e126c4"
	I1217 11:55:48.305965 1977694 cri.go:89] found id: ""
	I1217 11:55:48.306008 1977694 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:55:48.320657 1977694 out.go:203] 
	W1217 11:55:48.322120 1977694 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:55:48.322140 1977694 out.go:285] * 
	* 
	W1217 11:55:48.328989 1977694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:55:48.330404 1977694 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-737478 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-737478
helpers_test.go:244: (dbg) docker inspect no-preload-737478:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87",
	        "Created": "2025-12-17T11:53:25.367483082Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1963584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:43.619296736Z",
	            "FinishedAt": "2025-12-17T11:54:42.574645746Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/hosts",
	        "LogPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87-json.log",
	        "Name": "/no-preload-737478",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-737478:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-737478",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87",
	                "LowerDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-737478",
	                "Source": "/var/lib/docker/volumes/no-preload-737478/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-737478",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-737478",
	                "name.minikube.sigs.k8s.io": "no-preload-737478",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ecd3d650b18eca76e5e9112152a3f611510afe86c894d4ff96750ad4b561baad",
	            "SandboxKey": "/var/run/docker/netns/ecd3d650b18e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34626"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34627"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34630"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34628"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34629"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-737478": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c30ab0942ebedfa9daed9e159e1243b5098ef936ff9c2403568c9e33b8451ef1",
	                    "EndpointID": "f8a92ccaf8dc9721c76c142f663c191f178822397dec435fa963baf9d95daacb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ca:b5:7b:3a:ef:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-737478",
	                        "7dea84a847e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478: exit status 2 (341.927006ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-737478 logs -n 25
E1217 11:55:49.032645 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-737478 logs -n 25: (1.152271931s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ image   │ newest-cni-601829 image list --format=json                                                                                                                                                                                                         │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ pause   │ -p newest-cni-601829 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-382022 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ delete  │ -p newest-cni-601829                                                                                                                                                                                                                               │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p newest-cni-601829                                                                                                                                                                                                                               │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p embed-certs-542273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p auto-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 pgrep -a kubelet                                                                                                                                                                                                                    │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ image   │ no-preload-737478 image list --format=json                                                                                                                                                                                                         │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ pause   │ -p no-preload-737478 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:55:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:55:05.915015 1972864 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:05.915174 1972864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:05.915182 1972864 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:05.915188 1972864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:05.915474 1972864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:55:05.916077 1972864 out.go:368] Setting JSON to false
	I1217 11:55:05.917928 1972864 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20251,"bootTime":1765952255,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:55:05.918012 1972864 start.go:143] virtualization: kvm guest
	I1217 11:55:05.920036 1972864 out.go:179] * [default-k8s-diff-port-382022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:55:05.921753 1972864 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:55:05.921776 1972864 notify.go:221] Checking for updates...
	I1217 11:55:05.924500 1972864 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:05.926029 1972864 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:05.927481 1972864 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:55:05.928660 1972864 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:55:05.930205 1972864 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:55:05.932089 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:05.932942 1972864 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:05.966016 1972864 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:55:05.966214 1972864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:06.051196 1972864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:55:06.035134766 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:06.051362 1972864 docker.go:319] overlay module found
	I1217 11:55:06.053307 1972864 out.go:179] * Using the docker driver based on existing profile
	I1217 11:55:06.055121 1972864 start.go:309] selected driver: docker
	I1217 11:55:06.055187 1972864 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:06.055310 1972864 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:06.056083 1972864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:06.137330 1972864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:55:06.123341974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:06.137759 1972864 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:06.137803 1972864 cni.go:84] Creating CNI manager for ""
	I1217 11:55:06.137872 1972864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:06.137919 1972864 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:05.330562 1968420 node_ready.go:49] node "embed-certs-542273" is "Ready"
	I1217 11:55:05.330602 1968420 node_ready.go:38] duration metric: took 2.307590665s for node "embed-certs-542273" to be "Ready" ...
	I1217 11:55:05.330621 1968420 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:05.330685 1968420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:06.139050 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.109697945s)
	I1217 11:55:06.139117 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.099340031s)
	I1217 11:55:06.139286 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.913743017s)
	I1217 11:55:06.139473 1972864 out.go:179] * Starting "default-k8s-diff-port-382022" primary control-plane node in "default-k8s-diff-port-382022" cluster
	I1217 11:55:06.139346 1968420 api_server.go:72] duration metric: took 3.348259497s to wait for apiserver process to appear ...
	I1217 11:55:06.139489 1968420 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:06.139509 1968420 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:55:06.140802 1968420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-542273 addons enable metrics-server
	
	I1217 11:55:06.140797 1972864 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:55:06.141926 1972864 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:06.144623 1968420 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:55:06.144646 1968420 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
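The two 500 responses above come from the same retry loop: the apiserver is polled until its post-start hooks finish and /healthz returns 200, which happens a few lines further down. A minimal Go sketch of such a poll, assuming a self-signed serving certificate (hence InsecureSkipVerify) and reusing the endpoint URL from this log:

// healthz_poll.go - minimal sketch: poll an apiserver /healthz endpoint until
// it returns HTTP 200 or a deadline passes. Self-signed cert is assumed.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // retry until healthy or deadline
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}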
	I1217 11:55:06.155204 1968420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 11:55:06.143128 1972864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:06.143168 1972864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:55:06.143181 1972864 cache.go:65] Caching tarball of preloaded images
	I1217 11:55:06.143222 1972864 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:06.143289 1972864 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:55:06.143302 1972864 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:55:06.143455 1972864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:55:06.170086 1972864 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:06.170124 1972864 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:06.170144 1972864 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:06.170183 1972864 start.go:360] acquireMachinesLock for default-k8s-diff-port-382022: {Name:mkc3ede9873fa3c6fdab76bd3c88723bee4b3785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:06.170258 1972864 start.go:364] duration metric: took 50.675µs to acquireMachinesLock for "default-k8s-diff-port-382022"
	I1217 11:55:06.170281 1972864 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:55:06.170291 1972864 fix.go:54] fixHost starting: 
	I1217 11:55:06.170622 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:06.191065 1972864 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382022: state=Stopped err=<nil>
	W1217 11:55:06.191102 1972864 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 11:55:05.010946 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:07.509563 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	I1217 11:55:06.156501 1968420 addons.go:530] duration metric: took 3.364667334s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:55:06.639611 1968420 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:55:06.645152 1968420 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:55:06.646332 1968420 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:06.646360 1968420 api_server.go:131] duration metric: took 506.863143ms to wait for apiserver health ...
	I1217 11:55:06.646370 1968420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:06.653410 1968420 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:06.653442 1968420 system_pods.go:61] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:06.653453 1968420 system_pods.go:61] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:06.653461 1968420 system_pods.go:61] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:55:06.653469 1968420 system_pods.go:61] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:06.653477 1968420 system_pods.go:61] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:06.653484 1968420 system_pods.go:61] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:06.653496 1968420 system_pods.go:61] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:06.653502 1968420 system_pods.go:61] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:06.653514 1968420 system_pods.go:74] duration metric: took 7.137942ms to wait for pod list to return data ...
	I1217 11:55:06.653523 1968420 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:06.656046 1968420 default_sa.go:45] found service account: "default"
	I1217 11:55:06.656064 1968420 default_sa.go:55] duration metric: took 2.535516ms for default service account to be created ...
	I1217 11:55:06.656073 1968420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:06.658845 1968420 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:06.658872 1968420 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:06.658881 1968420 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:06.658890 1968420 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:55:06.658898 1968420 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:06.658912 1968420 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:06.658920 1968420 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:06.658936 1968420 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:06.658943 1968420 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:06.658952 1968420 system_pods.go:126] duration metric: took 2.874094ms to wait for k8s-apps to be running ...
	I1217 11:55:06.658961 1968420 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:06.659011 1968420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:06.674841 1968420 system_svc.go:56] duration metric: took 15.867236ms WaitForService to wait for kubelet
	I1217 11:55:06.674874 1968420 kubeadm.go:587] duration metric: took 3.883790125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:06.674896 1968420 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:06.679469 1968420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:06.679504 1968420 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:06.679524 1968420 node_conditions.go:105] duration metric: took 4.620965ms to run NodePressure ...
	I1217 11:55:06.679551 1968420 start.go:242] waiting for startup goroutines ...
	I1217 11:55:06.679561 1968420 start.go:247] waiting for cluster config update ...
	I1217 11:55:06.679575 1968420 start.go:256] writing updated cluster config ...
	I1217 11:55:06.679934 1968420 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:06.685757 1968420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:06.690580 1968420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 11:55:08.696479 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
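The system_pods and pod_ready waits above repeatedly list the kube-system pods and inspect their Ready condition. A rough client-go sketch of that check; the kubeconfig path is illustrative only, not the one used by this test run:

// podready_check.go - rough sketch: list kube-system pods and report whether
// each one currently has the Ready condition set to True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}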
	I1217 11:55:05.221867 1968426 out.go:252]   - Generating certificates and keys ...
	I1217 11:55:05.222013 1968426 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:55:05.222143 1968426 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:55:05.515027 1968426 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:55:05.840693 1968426 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:55:06.051969 1968426 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:55:06.488194 1968426 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:55:07.147959 1968426 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:55:07.148173 1968426 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-213935 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:55:07.452899 1968426 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:55:07.453095 1968426 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-213935 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:55:07.556891 1968426 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:55:07.863151 1968426 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:55:07.920730 1968426 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:55:07.920839 1968426 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:55:08.231818 1968426 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:55:08.551353 1968426 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:55:08.710825 1968426 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:55:08.929825 1968426 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:55:09.189615 1968426 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:55:09.190223 1968426 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:55:09.194170 1968426 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:55:06.193174 1972864 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-382022" ...
	I1217 11:55:06.193264 1972864 cli_runner.go:164] Run: docker start default-k8s-diff-port-382022
	I1217 11:55:06.526174 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:06.552214 1972864 kic.go:430] container "default-k8s-diff-port-382022" state is running.
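The state check above is a plain `docker container inspect` with a Go template. A small sketch of the same query done from Go (hypothetical helper, not minikube's own code):

// container_state.go - sketch: shell out to "docker container inspect" with a
// Go template to read a container's status, as the check above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-diff-port-382022")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("state:", state) // e.g. "running"
}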
	I1217 11:55:06.552760 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:06.577698 1972864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:55:06.577964 1972864 machine.go:94] provisionDockerMachine start ...
	I1217 11:55:06.578041 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:06.598700 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:06.599024 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:06.599042 1972864 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:55:06.599663 1972864 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45878->127.0.0.1:34641: read: connection reset by peer
	I1217 11:55:09.755152 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
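provisionDockerMachine talks to the node over the forwarded SSH port (127.0.0.1:34641 here) with the profile's id_rsa key. A sketch of running the same `hostname` command with golang.org/x/crypto/ssh; the port and key path are taken from this log and differ per run:

// ssh_hostname.go - sketch: run "hostname" over the node's forwarded SSH port.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34641", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}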
	
	I1217 11:55:09.755203 1972864 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-382022"
	I1217 11:55:09.755274 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:09.782521 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:09.782860 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:09.782881 1972864 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382022 && echo "default-k8s-diff-port-382022" | sudo tee /etc/hostname
	I1217 11:55:09.951044 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:55:09.951162 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:09.977929 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:09.978252 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:09.978284 1972864 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:55:10.136717 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:55:10.136752 1972864 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:55:10.136778 1972864 ubuntu.go:190] setting up certificates
	I1217 11:55:10.136791 1972864 provision.go:84] configureAuth start
	I1217 11:55:10.136861 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:10.163824 1972864 provision.go:143] copyHostCerts
	I1217 11:55:10.163910 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:55:10.163932 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:55:10.164006 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:55:10.164229 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:55:10.164249 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:55:10.164313 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:55:10.164476 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:55:10.164491 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:55:10.164547 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:55:10.164663 1972864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382022 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-382022 localhost minikube]
	I1217 11:55:10.370953 1972864 provision.go:177] copyRemoteCerts
	I1217 11:55:10.371025 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:55:10.371104 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:10.396183 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:10.499250 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:55:10.523707 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 11:55:10.547625 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:55:10.573466 1972864 provision.go:87] duration metric: took 436.653063ms to configureAuth
	I1217 11:55:10.573501 1972864 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:55:10.573749 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:10.573882 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:10.597313 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:10.597651 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:10.597694 1972864 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:55:11.011460 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:55:11.011491 1972864 machine.go:97] duration metric: took 4.43350855s to provisionDockerMachine
	I1217 11:55:11.011507 1972864 start.go:293] postStartSetup for "default-k8s-diff-port-382022" (driver="docker")
	I1217 11:55:11.011519 1972864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:55:11.011621 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:55:11.011686 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.034079 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.141182 1972864 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:55:11.145913 1972864 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:55:11.145947 1972864 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:55:11.145962 1972864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:55:11.146017 1972864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:55:11.146109 1972864 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:55:11.146199 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:55:11.157064 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:11.177488 1972864 start.go:296] duration metric: took 165.962986ms for postStartSetup
	I1217 11:55:11.177607 1972864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:55:11.177653 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.204846 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.308658 1972864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:55:11.315100 1972864 fix.go:56] duration metric: took 5.144801074s for fixHost
	I1217 11:55:11.315129 1972864 start.go:83] releasing machines lock for "default-k8s-diff-port-382022", held for 5.144858234s
	I1217 11:55:11.315199 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:11.338745 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:55:11.338818 1972864 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:55:11.338829 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:55:11.338879 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:55:11.338917 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:55:11.338953 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:55:11.339012 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:11.339099 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:55:11.339164 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.365430 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.495096 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:55:11.525235 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:55:11.552785 1972864 ssh_runner.go:195] Run: openssl version
	I1217 11:55:11.562436 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.573636 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:55:11.584676 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.589300 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.589361 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.641338 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:55:11.652096 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.661866 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:55:11.674821 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.680734 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.680803 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.729519 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:55:11.740678 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.750972 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:55:11.760660 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.766240 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.766330 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.818447 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:55:11.829562 1972864 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:55:11.834749 1972864 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
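Each certificate above is linked into /usr/share/ca-certificates, its openssl subject hash is computed, and the matching /etc/ssl/certs/<hash>.0 symlink is verified (b5213941.0 for minikubeCA.pem). A sketch that computes the hash and creates the <hash>.0 link directly; this is a hypothetical helper (the run above relies on update-ca-certificates for that link) and needs root to write /etc/ssl/certs:

// cert_link.go - sketch: compute a certificate's openssl subject hash and
// symlink it into /etc/ssl/certs as <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}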
	I1217 11:55:11.840877 1972864 ssh_runner.go:195] Run: cat /version.json
	I1217 11:55:11.840989 1972864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:55:11.929334 1972864 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:11.937487 1972864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:55:11.993284 1972864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:55:11.999774 1972864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:55:11.999914 1972864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:55:12.011051 1972864 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:55:12.011078 1972864 start.go:496] detecting cgroup driver to use...
	I1217 11:55:12.011113 1972864 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:55:12.011160 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:55:12.031108 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:55:12.049252 1972864 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:55:12.049318 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:55:12.069726 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:55:12.085078 1972864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:55:12.209963 1972864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:55:12.316465 1972864 docker.go:234] disabling docker service ...
	I1217 11:55:12.316548 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:55:12.333455 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:55:12.348978 1972864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:55:12.457995 1972864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:55:12.596548 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:55:12.612573 1972864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:55:12.628307 1972864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:55:12.628394 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.646387 1972864 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:55:12.646626 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.694485 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.704956 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.714913 1972864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:55:12.723885 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.734227 1972864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.746830 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.760588 1972864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:55:12.773376 1972864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:55:12.783526 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:12.888361 1972864 ssh_runner.go:195] Run: sudo systemctl restart crio
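The sed calls above edit cri-o's 02-crio.conf drop-in before the restart: they pin the pause image and switch the cgroup manager to systemd. A sketch of the same two rewrites done as string replacements in Go, mirroring the sed patterns from the log:

// crio_conf.go - sketch: apply the pause_image and cgroup_manager rewrites
// that the sed commands above perform on /etc/crio/crio.conf.d/02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(rewriteCrioConf(in)) // both lines now carry the desired values
}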
	I1217 11:55:13.205918 1972864 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:55:13.205985 1972864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:55:13.210225 1972864 start.go:564] Will wait 60s for crictl version
	I1217 11:55:13.210287 1972864 ssh_runner.go:195] Run: which crictl
	I1217 11:55:13.214055 1972864 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:55:13.241923 1972864 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:55:13.242004 1972864 ssh_runner.go:195] Run: crio --version
	I1217 11:55:13.272236 1972864 ssh_runner.go:195] Run: crio --version
	I1217 11:55:13.311001 1972864 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
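Before declaring the runtime ready, the restart above is followed by two waits: up to 60s for the crio socket to appear and up to 60s for `crictl version` to answer. A sketch of those two waits in Go, using the socket path and timeouts from the log:

// crictl_wait.go - sketch: poll for the cri-o socket, then run "crictl version"
// under an overall timeout.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx, "crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Print(string(out))
}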
	W1217 11:55:09.509773 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:11.511479 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:11.202390 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:13.702094 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:09.198065 1968426 out.go:252]   - Booting up control plane ...
	I1217 11:55:09.198177 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:55:09.198291 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:55:09.198350 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:55:09.212059 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:55:09.212187 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:55:09.221646 1968426 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:55:09.222066 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:55:09.222112 1968426 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:55:09.330911 1968426 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:55:09.331064 1968426 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:55:10.335948 1968426 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0019159s
	I1217 11:55:10.337319 1968426 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:55:10.337604 1968426 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 11:55:10.337743 1968426 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:55:10.337845 1968426 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:55:12.778929 1968426 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.441400896s
	I1217 11:55:12.814618 1968426 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.476747339s
	I1217 11:55:13.313210 1972864 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-382022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:55:13.337041 1972864 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:55:13.342732 1972864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:55:13.357159 1972864 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:55:13.357335 1972864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:13.357405 1972864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:55:13.401031 1972864 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:55:13.401061 1972864 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:55:13.401124 1972864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:55:13.435767 1972864 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:55:13.435795 1972864 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:55:13.435805 1972864 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.3 crio true true} ...
	I1217 11:55:13.435950 1972864 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-382022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
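	The duplicated ExecStart= in the generated drop-in above is the usual systemd override idiom: the empty assignment clears the ExecStart inherited from the packaged kubelet unit before the minikube-specific command line replaces it. A minimal way to confirm which drop-in is in effect on the node (a sketch; the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below):
	  # show the kubelet unit together with all of its drop-ins
	  sudo systemctl cat kubelet
	  # or print only the effective ExecStart after merging
	  systemctl show kubelet -p ExecStart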
	I1217 11:55:13.436036 1972864 ssh_runner.go:195] Run: crio config
	I1217 11:55:13.501779 1972864 cni.go:84] Creating CNI manager for ""
	I1217 11:55:13.501805 1972864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:13.501824 1972864 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:55:13.501855 1972864 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382022 NodeName:default-k8s-diff-port-382022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:55:13.502039 1972864 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-382022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:55:13.502129 1972864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:55:13.513932 1972864 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:55:13.514003 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:55:13.524819 1972864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 11:55:13.541119 1972864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:55:13.557811 1972864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
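	The kubeadm config dumped above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. If it ever needs a manual sanity check on the node, one hedged option is a dry run, which renders everything into a temporary directory instead of touching the real cluster paths:
	  # parse and validate the generated config without modifying the node (sketch)
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run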
	I1217 11:55:13.576185 1972864 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:55:13.581146 1972864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:55:13.594763 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:13.713132 1972864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:13.740075 1972864 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022 for IP: 192.168.76.2
	I1217 11:55:13.740104 1972864 certs.go:195] generating shared ca certs ...
	I1217 11:55:13.740126 1972864 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:13.740330 1972864 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:55:13.740393 1972864 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:55:13.740406 1972864 certs.go:257] generating profile certs ...
	I1217 11:55:13.740497 1972864 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key
	I1217 11:55:13.740635 1972864 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a
	I1217 11:55:13.740721 1972864 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key
	I1217 11:55:13.740846 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:55:13.740880 1972864 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:55:13.740887 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:55:13.740911 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:55:13.740934 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:55:13.740955 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:55:13.740993 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:13.741867 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:55:13.773747 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:55:13.804586 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:55:13.834707 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:55:13.869625 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 11:55:13.898355 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:55:13.922845 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:55:13.947273 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:55:13.972061 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:55:14.001446 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:55:14.027589 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:55:14.054132 1972864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:55:14.073337 1972864 ssh_runner.go:195] Run: openssl version
	I1217 11:55:14.082156 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.092983 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:55:14.103945 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.109736 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.109811 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.172322 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:55:14.182999 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.194351 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:55:14.210464 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.216214 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.216730 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.276197 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:55:14.287490 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.299145 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:55:14.312656 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.319064 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.319132 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.383567 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
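	The test/ln/ls/hash sequence above is how the host CAs land in the system trust store: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash (hence the 3ec20f2e.0, b5213941.0 and 51391683.0 checks). Reproducing one link by hand, as a sketch for the minikubeCA case:
	  # compute the subject hash and create the <hash>.0 link that OpenSSL resolves
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"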
	I1217 11:55:14.400321 1972864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:55:14.410392 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:55:14.479493 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:55:14.544631 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:55:14.604881 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:55:14.664836 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:55:14.723985 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
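	Each openssl run above passes -checkend 86400, i.e. "will this certificate still be valid 86400 seconds (24 hours) from now": exit status 0 means it survives the window, non-zero means it is about to expire and needs attention. The same check by hand, for example against the apiserver cert copied earlier:
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"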
	I1217 11:55:14.789590 1972864 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:14.789736 1972864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:55:14.789811 1972864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:55:14.835987 1972864 cri.go:89] found id: "8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910"
	I1217 11:55:14.836014 1972864 cri.go:89] found id: "b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7"
	I1217 11:55:14.836031 1972864 cri.go:89] found id: "7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0"
	I1217 11:55:14.836036 1972864 cri.go:89] found id: "6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466"
	I1217 11:55:14.836040 1972864 cri.go:89] found id: ""
	I1217 11:55:14.836091 1972864 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 11:55:14.856976 1972864 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:14Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:14.857081 1972864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:55:14.871120 1972864 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:55:14.871143 1972864 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:55:14.871281 1972864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:55:14.883106 1972864 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:55:14.884356 1972864 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382022" does not appear in /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:14.885194 1972864 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-1669348/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382022" cluster setting kubeconfig missing "default-k8s-diff-port-382022" context setting]
	I1217 11:55:14.886458 1972864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.889149 1972864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:55:14.903999 1972864 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 11:55:14.904040 1972864 kubeadm.go:602] duration metric: took 32.890057ms to restartPrimaryControlPlane
	I1217 11:55:14.904052 1972864 kubeadm.go:403] duration metric: took 114.480546ms to StartCluster
	I1217 11:55:14.904073 1972864 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.904147 1972864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:14.906555 1972864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.907164 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:14.907249 1972864 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:14.907415 1972864 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:55:14.907508 1972864 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.907527 1972864 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.907570 1972864 addons.go:248] addon storage-provisioner should already be in state true
	I1217 11:55:14.907602 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.908114 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.908339 1972864 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.908366 1972864 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.908413 1972864 addons.go:248] addon dashboard should already be in state true
	I1217 11:55:14.908464 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.908988 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.909258 1972864 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.909280 1972864 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382022"
	I1217 11:55:14.909613 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.912097 1972864 out.go:179] * Verifying Kubernetes components...
	I1217 11:55:14.838768 1968426 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501405436s
	I1217 11:55:14.866719 1968426 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:55:14.879181 1968426 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:55:14.897113 1968426 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:55:14.897628 1968426 kubeadm.go:319] [mark-control-plane] Marking the node auto-213935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:55:14.933522 1968426 kubeadm.go:319] [bootstrap-token] Using token: xj4v1d.49m4e5gs1ckj0agu
	I1217 11:55:14.914078 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:14.941643 1972864 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 11:55:14.941685 1972864 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:55:14.943971 1972864 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:14.943992 1972864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:55:14.944059 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.944144 1972864 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 11:55:14.944150 1972864 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.944168 1972864 addons.go:248] addon default-storageclass should already be in state true
	I1217 11:55:14.944231 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.944732 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.945181 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 11:55:14.945206 1972864 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 11:55:14.945256 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.985758 1972864 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:14.985961 1972864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:55:14.986156 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.989510 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:14.991637 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:15.024797 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:15.126636 1972864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:15.136146 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 11:55:15.136172 1972864 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 11:55:15.137704 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:15.145951 1972864 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:55:15.164090 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 11:55:15.164119 1972864 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 11:55:15.173306 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:15.187105 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 11:55:15.187135 1972864 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 11:55:15.211988 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 11:55:15.212013 1972864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 11:55:15.235388 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 11:55:15.235420 1972864 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 11:55:15.261315 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 11:55:15.261346 1972864 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 11:55:15.282989 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 11:55:15.283043 1972864 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 11:55:15.310716 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 11:55:15.310761 1972864 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 11:55:15.336860 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:55:15.336898 1972864 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 11:55:15.359900 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:55:14.936461 1968426 out.go:252]   - Configuring RBAC rules ...
	I1217 11:55:14.936651 1968426 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:55:14.943579 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:55:14.957528 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:55:14.965463 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:55:14.973724 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:55:14.980763 1968426 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:55:15.247181 1968426 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:55:15.680248 1968426 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:55:16.249887 1968426 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:55:16.251461 1968426 kubeadm.go:319] 
	I1217 11:55:16.252502 1968426 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:55:16.252519 1968426 kubeadm.go:319] 
	I1217 11:55:16.252621 1968426 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:55:16.252626 1968426 kubeadm.go:319] 
	I1217 11:55:16.252655 1968426 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:55:16.252727 1968426 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:55:16.252784 1968426 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:55:16.252789 1968426 kubeadm.go:319] 
	I1217 11:55:16.252861 1968426 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:55:16.252866 1968426 kubeadm.go:319] 
	I1217 11:55:16.252924 1968426 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:55:16.252929 1968426 kubeadm.go:319] 
	I1217 11:55:16.252987 1968426 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:55:16.253073 1968426 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:55:16.253152 1968426 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:55:16.253156 1968426 kubeadm.go:319] 
	I1217 11:55:16.253254 1968426 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:55:16.253341 1968426 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:55:16.253346 1968426 kubeadm.go:319] 
	I1217 11:55:16.253707 1968426 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xj4v1d.49m4e5gs1ckj0agu \
	I1217 11:55:16.253911 1968426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:55:16.253969 1968426 kubeadm.go:319] 	--control-plane 
	I1217 11:55:16.253985 1968426 kubeadm.go:319] 
	I1217 11:55:16.254174 1968426 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:55:16.254236 1968426 kubeadm.go:319] 
	I1217 11:55:16.254382 1968426 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xj4v1d.49m4e5gs1ckj0agu \
	I1217 11:55:16.254589 1968426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:55:16.256944 1968426 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:55:16.257103 1968426 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
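	The sha256 value passed as --discovery-token-ca-cert-hash in the join commands above is the hash of the cluster CA public key. If it ever has to be recomputed, the standard kubeadm recipe applies; a sketch using the CA path this profile uses (/var/lib/minikube/certs/ca.crt, copied earlier in this log):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'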
	I1217 11:55:16.257146 1968426 cni.go:84] Creating CNI manager for ""
	I1217 11:55:16.257166 1968426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:16.261095 1968426 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:55:16.649970 1972864 node_ready.go:49] node "default-k8s-diff-port-382022" is "Ready"
	I1217 11:55:16.650012 1972864 node_ready.go:38] duration metric: took 1.504027618s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:55:16.650047 1972864 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:16.650124 1972864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:17.435218 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.297480989s)
	I1217 11:55:17.435331 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261978145s)
	I1217 11:55:17.435439 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.075506487s)
	I1217 11:55:17.435502 1972864 api_server.go:72] duration metric: took 2.528210759s to wait for apiserver process to appear ...
	I1217 11:55:17.435674 1972864 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:17.435729 1972864 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:55:17.437001 1972864 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382022 addons enable metrics-server
	
	I1217 11:55:17.440773 1972864 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:55:17.440809 1972864 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
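	The 500 above is transient: only the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and the same endpoint returns 200 "ok" a moment later in this log. The per-check breakdown can also be fetched directly; a sketch against the port this profile uses (self-signed cert, hence -k):
	  # verbose healthz lists every post-start hook, as in the dump above
	  curl -sk 'https://192.168.76.2:8444/healthz?verbose'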
	I1217 11:55:17.443120 1972864 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 11:55:14.010760 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:16.511895 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:15.703398 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:18.196308 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:16.262819 1968426 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:55:16.269037 1968426 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:55:16.269068 1968426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:55:16.288816 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:55:16.641451 1968426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:55:16.641740 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:16.641765 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-213935 minikube.k8s.io/updated_at=2025_12_17T11_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=auto-213935 minikube.k8s.io/primary=true
	I1217 11:55:16.666162 1968426 ops.go:34] apiserver oom_adj: -16
	I1217 11:55:16.779798 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:17.280793 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:17.780773 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:18.280818 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:18.780724 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:19.280002 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:19.780707 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.279886 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.780546 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.849820 1968426 kubeadm.go:1114] duration metric: took 4.208149599s to wait for elevateKubeSystemPrivileges
	I1217 11:55:20.849881 1968426 kubeadm.go:403] duration metric: took 16.084919874s to StartCluster
	I1217 11:55:20.849907 1968426 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:20.849987 1968426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:20.852845 1968426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:20.853190 1968426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:55:20.853184 1968426 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:20.853505 1968426 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:55:20.853715 1968426 addons.go:70] Setting storage-provisioner=true in profile "auto-213935"
	I1217 11:55:20.853737 1968426 addons.go:239] Setting addon storage-provisioner=true in "auto-213935"
	I1217 11:55:20.853821 1968426 addons.go:70] Setting default-storageclass=true in profile "auto-213935"
	I1217 11:55:20.853834 1968426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-213935"
	I1217 11:55:20.853900 1968426 host.go:66] Checking if "auto-213935" exists ...
	I1217 11:55:20.854788 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.854908 1968426 config.go:182] Loaded profile config "auto-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:20.855253 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.856383 1968426 out.go:179] * Verifying Kubernetes components...
	I1217 11:55:20.859959 1968426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:20.885192 1968426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:55:17.444238 1972864 addons.go:530] duration metric: took 2.53682973s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:55:17.936751 1972864 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:55:17.941896 1972864 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 11:55:17.943002 1972864 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:17.943033 1972864 api_server.go:131] duration metric: took 507.348152ms to wait for apiserver health ...
	I1217 11:55:17.943044 1972864 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:17.946080 1972864 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:17.946129 1972864 system_pods.go:61] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:17.946148 1972864 system_pods.go:61] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:17.946156 1972864 system_pods.go:61] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:55:17.946161 1972864 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:17.946167 1972864 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:17.946178 1972864 system_pods.go:61] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:17.946186 1972864 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:17.946195 1972864 system_pods.go:61] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Running
	I1217 11:55:17.946206 1972864 system_pods.go:74] duration metric: took 3.151523ms to wait for pod list to return data ...
	I1217 11:55:17.946218 1972864 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:17.948580 1972864 default_sa.go:45] found service account: "default"
	I1217 11:55:17.948602 1972864 default_sa.go:55] duration metric: took 2.373002ms for default service account to be created ...
	I1217 11:55:17.948613 1972864 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:17.951056 1972864 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:17.951085 1972864 system_pods.go:89] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:17.951094 1972864 system_pods.go:89] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:17.951103 1972864 system_pods.go:89] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:55:17.951109 1972864 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:17.951118 1972864 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:17.951124 1972864 system_pods.go:89] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:17.951132 1972864 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:17.951136 1972864 system_pods.go:89] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Running
	I1217 11:55:17.951143 1972864 system_pods.go:126] duration metric: took 2.523832ms to wait for k8s-apps to be running ...
	I1217 11:55:17.951158 1972864 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:17.951204 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:17.978730 1972864 system_svc.go:56] duration metric: took 27.563396ms WaitForService to wait for kubelet
	I1217 11:55:17.978769 1972864 kubeadm.go:587] duration metric: took 3.071477819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:17.978809 1972864 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:17.981772 1972864 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:17.981811 1972864 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:17.981831 1972864 node_conditions.go:105] duration metric: took 3.01527ms to run NodePressure ...
	I1217 11:55:17.981847 1972864 start.go:242] waiting for startup goroutines ...
	I1217 11:55:17.981857 1972864 start.go:247] waiting for cluster config update ...
	I1217 11:55:17.981873 1972864 start.go:256] writing updated cluster config ...
	I1217 11:55:17.982150 1972864 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:17.986348 1972864 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:17.990447 1972864 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nz5c" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 11:55:19.996696 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:20.887329 1968426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:20.887356 1968426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:55:20.887434 1968426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-213935
	I1217 11:55:20.887839 1968426 addons.go:239] Setting addon default-storageclass=true in "auto-213935"
	I1217 11:55:20.887900 1968426 host.go:66] Checking if "auto-213935" exists ...
	I1217 11:55:20.888916 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.915669 1968426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34636 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/auto-213935/id_rsa Username:docker}
	I1217 11:55:20.916618 1968426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:20.916642 1968426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:55:20.916694 1968426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-213935
	I1217 11:55:20.949341 1968426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34636 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/auto-213935/id_rsa Username:docker}
	I1217 11:55:20.973408 1968426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:55:21.038628 1968426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:21.042668 1968426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:21.071913 1968426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:21.172319 1968426 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 11:55:21.173658 1968426 node_ready.go:35] waiting up to 15m0s for node "auto-213935" to be "Ready" ...
	I1217 11:55:21.361677 1968426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 11:55:19.008895 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:21.011708 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:23.012487 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:20.197307 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:22.697565 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:21.363154 1968426 addons.go:530] duration metric: took 509.647318ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:55:21.676506 1968426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-213935" context rescaled to 1 replicas
	W1217 11:55:23.177425 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:21.997728 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:24.498078 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:25.510927 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:28.009327 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:25.198433 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:27.696164 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:25.178008 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:27.677094 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:26.997192 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:29.496072 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:30.009958 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:32.508570 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:30.196632 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:32.696852 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:29.677249 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:32.177004 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	I1217 11:55:33.510207 1963245 pod_ready.go:94] pod "coredns-7d764666f9-n2kvr" is "Ready"
	I1217 11:55:33.510240 1963245 pod_ready.go:86] duration metric: took 39.006915253s for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.513226 1963245 pod_ready.go:83] waiting for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.518013 1963245 pod_ready.go:94] pod "etcd-no-preload-737478" is "Ready"
	I1217 11:55:33.518042 1963245 pod_ready.go:86] duration metric: took 4.791962ms for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.520439 1963245 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.525019 1963245 pod_ready.go:94] pod "kube-apiserver-no-preload-737478" is "Ready"
	I1217 11:55:33.525042 1963245 pod_ready.go:86] duration metric: took 4.576574ms for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.527093 1963245 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.707246 1963245 pod_ready.go:94] pod "kube-controller-manager-no-preload-737478" is "Ready"
	I1217 11:55:33.707289 1963245 pod_ready.go:86] duration metric: took 180.171414ms for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.908251 1963245 pod_ready.go:83] waiting for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.308267 1963245 pod_ready.go:94] pod "kube-proxy-5tkm8" is "Ready"
	I1217 11:55:34.308294 1963245 pod_ready.go:86] duration metric: took 400.014798ms for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.508400 1963245 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.907605 1963245 pod_ready.go:94] pod "kube-scheduler-no-preload-737478" is "Ready"
	I1217 11:55:34.907635 1963245 pod_ready.go:86] duration metric: took 399.204157ms for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.907651 1963245 pod_ready.go:40] duration metric: took 40.409789961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:34.953713 1963245 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:55:34.955479 1963245 out.go:179] * Done! kubectl is now configured to use "no-preload-737478" cluster and "default" namespace by default
	W1217 11:55:31.496442 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:33.497214 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:34.176846 1968426 node_ready.go:49] node "auto-213935" is "Ready"
	I1217 11:55:34.176882 1968426 node_ready.go:38] duration metric: took 13.003193123s for node "auto-213935" to be "Ready" ...
	I1217 11:55:34.176901 1968426 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:34.176959 1968426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:34.189526 1968426 api_server.go:72] duration metric: took 13.336301645s to wait for apiserver process to appear ...
	I1217 11:55:34.189581 1968426 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:34.189603 1968426 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:55:34.194353 1968426 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:55:34.195733 1968426 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:34.195766 1968426 api_server.go:131] duration metric: took 6.176107ms to wait for apiserver health ...
	I1217 11:55:34.195777 1968426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:34.199311 1968426 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:34.199371 1968426 system_pods.go:61] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.199377 1968426 system_pods.go:61] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.199384 1968426 system_pods.go:61] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.199388 1968426 system_pods.go:61] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.199392 1968426 system_pods.go:61] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.199400 1968426 system_pods.go:61] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.199403 1968426 system_pods.go:61] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.199408 1968426 system_pods.go:61] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.199415 1968426 system_pods.go:74] duration metric: took 3.630946ms to wait for pod list to return data ...
	I1217 11:55:34.199426 1968426 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:34.202301 1968426 default_sa.go:45] found service account: "default"
	I1217 11:55:34.202324 1968426 default_sa.go:55] duration metric: took 2.891984ms for default service account to be created ...
	I1217 11:55:34.202343 1968426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:34.205662 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.205701 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.205708 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.205715 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.205721 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.205725 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.205729 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.205733 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.205738 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.205764 1968426 retry.go:31] will retry after 248.175498ms: missing components: kube-dns
	I1217 11:55:34.457746 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.457785 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.457794 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.457802 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.457806 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.457812 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.457817 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.457823 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.457830 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.457859 1968426 retry.go:31] will retry after 326.462384ms: missing components: kube-dns
	I1217 11:55:34.789392 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.789420 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Running
	I1217 11:55:34.789426 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.789429 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.789433 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.789438 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.789445 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.789450 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.789454 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Running
	I1217 11:55:34.789464 1968426 system_pods.go:126] duration metric: took 587.114184ms to wait for k8s-apps to be running ...
	I1217 11:55:34.789478 1968426 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:34.789560 1968426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:34.803259 1968426 system_svc.go:56] duration metric: took 13.763269ms WaitForService to wait for kubelet
	I1217 11:55:34.803301 1968426 kubeadm.go:587] duration metric: took 13.950081466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:34.803337 1968426 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:34.806420 1968426 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:34.806448 1968426 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:34.806467 1968426 node_conditions.go:105] duration metric: took 3.124048ms to run NodePressure ...
	I1217 11:55:34.806479 1968426 start.go:242] waiting for startup goroutines ...
	I1217 11:55:34.806487 1968426 start.go:247] waiting for cluster config update ...
	I1217 11:55:34.806497 1968426 start.go:256] writing updated cluster config ...
	I1217 11:55:34.806796 1968426 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:34.811172 1968426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:34.815270 1968426 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r2wht" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.821953 1968426 pod_ready.go:94] pod "coredns-66bc5c9577-r2wht" is "Ready"
	I1217 11:55:34.821976 1968426 pod_ready.go:86] duration metric: took 6.6841ms for pod "coredns-66bc5c9577-r2wht" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.824088 1968426 pod_ready.go:83] waiting for pod "etcd-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.827935 1968426 pod_ready.go:94] pod "etcd-auto-213935" is "Ready"
	I1217 11:55:34.827960 1968426 pod_ready.go:86] duration metric: took 3.852249ms for pod "etcd-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.829811 1968426 pod_ready.go:83] waiting for pod "kube-apiserver-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.833423 1968426 pod_ready.go:94] pod "kube-apiserver-auto-213935" is "Ready"
	I1217 11:55:34.833444 1968426 pod_ready.go:86] duration metric: took 3.613745ms for pod "kube-apiserver-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.835327 1968426 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.215776 1968426 pod_ready.go:94] pod "kube-controller-manager-auto-213935" is "Ready"
	I1217 11:55:35.215807 1968426 pod_ready.go:86] duration metric: took 380.458261ms for pod "kube-controller-manager-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.416163 1968426 pod_ready.go:83] waiting for pod "kube-proxy-54kwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.816672 1968426 pod_ready.go:94] pod "kube-proxy-54kwh" is "Ready"
	I1217 11:55:35.816705 1968426 pod_ready.go:86] duration metric: took 400.512173ms for pod "kube-proxy-54kwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.015880 1968426 pod_ready.go:83] waiting for pod "kube-scheduler-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.416249 1968426 pod_ready.go:94] pod "kube-scheduler-auto-213935" is "Ready"
	I1217 11:55:36.416277 1968426 pod_ready.go:86] duration metric: took 400.367181ms for pod "kube-scheduler-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.416289 1968426 pod_ready.go:40] duration metric: took 1.605081184s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:36.462404 1968426 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:55:36.464325 1968426 out.go:179] * Done! kubectl is now configured to use "auto-213935" cluster and "default" namespace by default
	W1217 11:55:34.697503 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:37.196552 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:38.197165 1968420 pod_ready.go:94] pod "coredns-66bc5c9577-t66bd" is "Ready"
	I1217 11:55:38.197201 1968420 pod_ready.go:86] duration metric: took 31.506592273s for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.199718 1968420 pod_ready.go:83] waiting for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.204687 1968420 pod_ready.go:94] pod "etcd-embed-certs-542273" is "Ready"
	I1217 11:55:38.204716 1968420 pod_ready.go:86] duration metric: took 4.969846ms for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.207346 1968420 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.212242 1968420 pod_ready.go:94] pod "kube-apiserver-embed-certs-542273" is "Ready"
	I1217 11:55:38.212273 1968420 pod_ready.go:86] duration metric: took 4.899712ms for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.214736 1968420 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.395360 1968420 pod_ready.go:94] pod "kube-controller-manager-embed-certs-542273" is "Ready"
	I1217 11:55:38.395391 1968420 pod_ready.go:86] duration metric: took 180.631954ms for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.595230 1968420 pod_ready.go:83] waiting for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.995218 1968420 pod_ready.go:94] pod "kube-proxy-gfbw9" is "Ready"
	I1217 11:55:38.995250 1968420 pod_ready.go:86] duration metric: took 399.986048ms for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.194526 1968420 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.595252 1968420 pod_ready.go:94] pod "kube-scheduler-embed-certs-542273" is "Ready"
	I1217 11:55:39.595285 1968420 pod_ready.go:86] duration metric: took 400.717588ms for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.595302 1968420 pod_ready.go:40] duration metric: took 32.909508699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:39.642757 1968420 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:55:39.644690 1968420 out.go:179] * Done! kubectl is now configured to use "embed-certs-542273" cluster and "default" namespace by default
	W1217 11:55:35.996074 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:38.496938 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:40.996388 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:43.496698 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 11:55:12 no-preload-737478 crio[605]: time="2025-12-17T11:55:12.425833121Z" level=info msg="Started container" PID=1771 containerID=c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper id=b03e6b26-fcd5-417c-b1e6-3fc425a0371b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b09b5e7103ba9b519897af8233d257b7e46be83c208e3da9adc285dc9cfd4d9
	Dec 17 11:55:12 no-preload-737478 crio[605]: time="2025-12-17T11:55:12.506320657Z" level=info msg="Removing container: d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5" id=d2219b65-6c82-4f8e-b002-37332e32fcc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:12 no-preload-737478 crio[605]: time="2025-12-17T11:55:12.5213633Z" level=info msg="Removed container d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=d2219b65-6c82-4f8e-b002-37332e32fcc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.537777982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8189faca-ab32-403b-b436-7e6d42b41624 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.538837649Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20991cb0-cf3a-4dc4-b73b-d258ab4a337b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.540250202Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8c6c9514-6581-444d-b33a-6e2ecac5f8bf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.540654363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.546414641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.546630995Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f63d09b051e7d9ed8dd6e0f97d817a1e897c67202815cf5c6427add304af439/merged/etc/passwd: no such file or directory"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.546681767Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f63d09b051e7d9ed8dd6e0f97d817a1e897c67202815cf5c6427add304af439/merged/etc/group: no such file or directory"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.547746043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.582053187Z" level=info msg="Created container 1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4: kube-system/storage-provisioner/storage-provisioner" id=8c6c9514-6581-444d-b33a-6e2ecac5f8bf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.582778477Z" level=info msg="Starting container: 1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4" id=1ca3b4d3-b8fd-4801-9932-3b2cacf7df98 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.585331355Z" level=info msg="Started container" PID=1785 containerID=1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4 description=kube-system/storage-provisioner/storage-provisioner id=1ca3b4d3-b8fd-4801-9932-3b2cacf7df98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0871373db009f4d1d2392604362696bee47f86dff279ecedb95529e5102f9b3
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.374802373Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98f30f8c-9d6c-417c-beb1-93ce61934326 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.375912247Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9d75f1a2-5153-4c4d-8702-4f9b1387acfd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.377053825Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=ba994f5e-0023-4637-aed7-9cde3e6142ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.377224198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.383387541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.384039946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.422976594Z" level=info msg="Created container 0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=ba994f5e-0023-4637-aed7-9cde3e6142ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.423692112Z" level=info msg="Starting container: 0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee" id=77ae6bcd-08f6-43a5-a8d7-bec8111958bc name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.425652471Z" level=info msg="Started container" PID=1819 containerID=0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper id=77ae6bcd-08f6-43a5-a8d7-bec8111958bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b09b5e7103ba9b519897af8233d257b7e46be83c208e3da9adc285dc9cfd4d9
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.578470941Z" level=info msg="Removing container: c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372" id=df0a1d84-8710-4655-8638-edfdcd536d36 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.596696417Z" level=info msg="Removed container c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=df0a1d84-8710-4655-8638-edfdcd536d36 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0e25eb79c6bd9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   1b09b5e7103ba       dashboard-metrics-scraper-867fb5f87b-lzxn4   kubernetes-dashboard
	1e8a997b4b341       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   e0871373db009       storage-provisioner                          kube-system
	fd3aeebcdab23       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   abb45e930b43a       kubernetes-dashboard-b84665fb8-t9pxx         kubernetes-dashboard
	8c0fc19eb2c75       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   93150660521ec       coredns-7d764666f9-n2kvr                     kube-system
	9ad911b33c8d7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   a530b9d4febd3       busybox                                      default
	e366a6880a703       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   e0871373db009       storage-provisioner                          kube-system
	c3857941ca2aa       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 0                   dd810bab5711c       kindnet-fnspp                                kube-system
	1d2b1bb8a1b76       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           55 seconds ago      Running             kube-proxy                  0                   1961f766473ef       kube-proxy-5tkm8                             kube-system
	dfa862cc6c124       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        0                   4ac0cb416820f       etcd-no-preload-737478                       kube-system
	8e9c526071331       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           58 seconds ago      Running             kube-apiserver              0                   94b32c89adeba       kube-apiserver-no-preload-737478             kube-system
	59ffeef8ed703       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           58 seconds ago      Running             kube-scheduler              0                   52322dcee8c2a       kube-scheduler-no-preload-737478             kube-system
	2927eecd91f4b       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           58 seconds ago      Running             kube-controller-manager     0                   a76158e34139f       kube-controller-manager-no-preload-737478    kube-system
	
	
	==> coredns [8c0fc19eb2c75cce822364a4b57ad3d996f36be504d773da1f4a3833e438910b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44889 - 24894 "HINFO IN 8960180135453485830.4829056896470198371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030488511s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-737478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-737478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=no-preload-737478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_53_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:53:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-737478
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:55:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-737478
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                247c8806-279e-4c7a-81b2-36bc1da2ec08
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-n2kvr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-737478                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-fnspp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-737478              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-737478     200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-5tkm8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-737478              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-lzxn4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-t9pxx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  115s  node-controller  Node no-preload-737478 event: Registered Node no-preload-737478 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-737478 event: Registered Node no-preload-737478 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [dfa862cc6c124cbff58725fd6b60cb1a8b9eefcaf56e3fc283931533b497b6f9] <==
	{"level":"info","ts":"2025-12-17T11:54:50.949171Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:54:50.949229Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:54:50.949556Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:50.949580Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:50.949915Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-17T11:54:50.949997Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T11:54:50.950075Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:51.938425Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:51.938523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:51.938622Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:51.938649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:51.938666Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.939582Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.939620Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:51.939641Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.939649Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.941013Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:51.941031Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:51.941011Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-737478 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:54:51.941299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:51.941317Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:51.943337Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:51.943406Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:51.945580Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T11:54:51.945580Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 11:55:49 up  5:38,  0 user,  load average: 5.58, 4.15, 2.59
	Linux no-preload-737478 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c3857941ca2aae221674eea310f456831e8d058f682132b671e62d0c96c1fc17] <==
	I1217 11:54:54.015212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:54.015580       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 11:54:54.015779       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:54.015802       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:54.015824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:54.218812       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:54.219068       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:54.219101       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:54.219219       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:54.710964       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:54.711012       1 metrics.go:72] Registering metrics
	I1217 11:54:54.711086       1 controller.go:711] "Syncing nftables rules"
	I1217 11:55:04.218707       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:04.218821       1 main.go:301] handling current node
	I1217 11:55:14.218675       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:14.218718       1 main.go:301] handling current node
	I1217 11:55:24.218275       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:24.218321       1 main.go:301] handling current node
	I1217 11:55:34.217714       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:34.217758       1 main.go:301] handling current node
	I1217 11:55:44.221046       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:44.221084       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e9c5260713310721b633c55bf538fd5250281666a4f79e7afb0e39f48e8752a] <==
	I1217 11:54:52.918886       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 11:54:52.918892       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 11:54:52.919600       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 11:54:52.919984       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 11:54:52.920059       1 aggregator.go:187] initial CRD sync complete...
	I1217 11:54:52.920091       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:54:52.920130       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:54:52.920156       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:54:52.925711       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 11:54:52.926520       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 11:54:52.966585       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:52.966611       1 policy_source.go:248] refreshing policies
	I1217 11:54:52.977558       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:52.978080       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:53.205837       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:53.238525       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:53.262133       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:53.272889       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:53.282355       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:53.339806       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.104.102"}
	I1217 11:54:53.355795       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.95.82"}
	I1217 11:54:53.823323       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 11:54:56.549515       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:56.600566       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:56.700081       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2927eecd91f4b36c104d665f79cbb47dbc7e16d7f360c6a4e4e977b70d7eaf43] <==
	I1217 11:54:56.053030       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.053052       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.056677       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.058905       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:56.059803       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060061       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060123       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060190       1 range_allocator.go:177] "Sending events to api server"
	I1217 11:54:56.060255       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 11:54:56.060281       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:56.061281       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060494       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061807       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 11:54:56.061959       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-737478"
	I1217 11:54:56.062083       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 11:54:56.061443       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060376       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061367       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061417       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061464       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061396       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.156913       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.156931       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 11:54:56.156936       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 11:54:56.159051       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1d2b1bb8a1b76843007cd338cd29ad6ab7ffd7691330930addf1432fa7421ec5] <==
	I1217 11:54:53.795822       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:53.872114       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:53.972713       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:53.972760       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 11:54:53.972895       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:53.999441       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:53.999517       1 server_linux.go:136] "Using iptables Proxier"
	I1217 11:54:54.006735       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:54.007281       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 11:54:54.007448       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:54.009461       1 config.go:200] "Starting service config controller"
	I1217 11:54:54.011252       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:54.009632       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:54.009663       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:54.012073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:54.011503       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:54.009828       1 config.go:309] "Starting node config controller"
	I1217 11:54:54.012612       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:54.012663       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:54:54.112434       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:54:54.112528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:54:54.112562       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [59ffeef8ed7039998fb2d90ffdb8f586577c7fac1aeca5d33293a0883dcf6fe1] <==
	I1217 11:54:51.234050       1 serving.go:386] Generated self-signed cert in-memory
	W1217 11:54:52.838367       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 11:54:52.839273       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 11:54:52.839337       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 11:54:52.839349       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 11:54:52.884744       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 11:54:52.884909       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:52.888160       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:54:52.888312       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:54:52.888324       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:52.888341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 11:54:52.905483       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 11:54:52.905492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1217 11:54:52.989044       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 11:55:10 no-preload-737478 kubelet[751]: E1217 11:55:10.193771     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: E1217 11:55:12.373877     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: I1217 11:55:12.373913     751 scope.go:122] "RemoveContainer" containerID="d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: I1217 11:55:12.499034     751 scope.go:122] "RemoveContainer" containerID="d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: E1217 11:55:12.499322     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: I1217 11:55:12.499369     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: E1217 11:55:12.499588     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:20 no-preload-737478 kubelet[751]: E1217 11:55:20.193738     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:20 no-preload-737478 kubelet[751]: I1217 11:55:20.193793     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:20 no-preload-737478 kubelet[751]: E1217 11:55:20.194036     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:24 no-preload-737478 kubelet[751]: I1217 11:55:24.537262     751 scope.go:122] "RemoveContainer" containerID="e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372"
	Dec 17 11:55:33 no-preload-737478 kubelet[751]: E1217 11:55:33.013663     751 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n2kvr" containerName="coredns"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: E1217 11:55:37.374150     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: I1217 11:55:37.374185     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: I1217 11:55:37.575595     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: E1217 11:55:37.575941     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: I1217 11:55:37.575964     751 scope.go:122] "RemoveContainer" containerID="0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: E1217 11:55:37.576138     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:40 no-preload-737478 kubelet[751]: E1217 11:55:40.192932     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:40 no-preload-737478 kubelet[751]: I1217 11:55:40.192975     751 scope.go:122] "RemoveContainer" containerID="0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	Dec 17 11:55:40 no-preload-737478 kubelet[751]: E1217 11:55:40.193142     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:47 no-preload-737478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:55:47 no-preload-737478 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:55:47 no-preload-737478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:47 no-preload-737478 systemd[1]: kubelet.service: Consumed 1.896s CPU time.
	
	
	==> kubernetes-dashboard [fd3aeebcdab235840cddfdf9ae02671bf3de7091045cb6660338a7cb39e126c4] <==
	2025/12/17 11:55:06 Using namespace: kubernetes-dashboard
	2025/12/17 11:55:06 Using in-cluster config to connect to apiserver
	2025/12/17 11:55:06 Using secret token for csrf signing
	2025/12/17 11:55:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:55:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:55:06 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/17 11:55:06 Generating JWE encryption key
	2025/12/17 11:55:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:55:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:55:06 Initializing JWE encryption key from synchronized object
	2025/12/17 11:55:06 Creating in-cluster Sidecar client
	2025/12/17 11:55:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:06 Serving insecurely on HTTP port: 9090
	2025/12/17 11:55:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:06 Starting overwatch
	
	
	==> storage-provisioner [1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4] <==
	I1217 11:55:24.600306       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:55:24.610257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:55:24.610352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:55:24.612920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:28.069330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:32.330160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:35.929052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:38.982581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:42.004896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:42.010199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:42.010495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:55:42.010604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c69f3844-a665-403c-a70c-0a1934605a75", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-737478_6107e95e-6e8d-4fc9-bbbb-93a57b6f037b became leader
	I1217 11:55:42.010681       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-737478_6107e95e-6e8d-4fc9-bbbb-93a57b6f037b!
	W1217 11:55:42.012852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:42.016612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:42.111728       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-737478_6107e95e-6e8d-4fc9-bbbb-93a57b6f037b!
	W1217 11:55:44.019606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:44.024069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:46.028126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:46.034600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:48.038421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:48.042404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372] <==
	I1217 11:54:53.762287       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:55:23.766960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
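A note on the kube-scheduler lines in the log dump above: the "Failed to watch ... is forbidden" messages are RBAC denials logged while the apiserver was still creating its bootstrap roles (its healthz output at the same moment shows poststarthook/rbac/bootstrap-roles failing). If they persisted after startup, one hedged way to re-check the scheduler's permissions would be kubectl impersonation; this is an illustrative follow-up, not a step the test runs:

    # Ask the apiserver whether the impersonated scheduler identity may list the two denied resources; prints "yes" or "no"
    kubectl --context no-preload-737478 auth can-i list resourceslices.resource.k8s.io --as=system:kube-scheduler
    kubectl --context no-preload-737478 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler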
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737478 -n no-preload-737478
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737478 -n no-preload-737478: exit status 2 (334.004048ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
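The helper samples one field at a time from minikube status ({{.APIServer}} above, {{.Host}} later). When reading a report like this it can be easier to see every component in one line; a sketch assuming the same profile, using the .Host and .APIServer fields the test itself queries plus the Kubelet and Kubeconfig fields of minikube's status template:

    # Print host, kubelet, apiserver and kubeconfig state on one line for the profile under test
    out/minikube-linux-amd64 status -p no-preload-737478 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'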
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-737478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-737478
helpers_test.go:244: (dbg) docker inspect no-preload-737478:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87",
	        "Created": "2025-12-17T11:53:25.367483082Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1963584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:43.619296736Z",
	            "FinishedAt": "2025-12-17T11:54:42.574645746Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/hosts",
	        "LogPath": "/var/lib/docker/containers/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87/7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87-json.log",
	        "Name": "/no-preload-737478",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-737478:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-737478",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dea84a847e15ce5cd4cb59487aa054875acb5e0476db82e43cf87dafa1c5a87",
	                "LowerDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005920dffcb7a10d434dc4823a7a8e71a66d93b49078b09f4ea13a55dbb36276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-737478",
	                "Source": "/var/lib/docker/volumes/no-preload-737478/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-737478",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-737478",
	                "name.minikube.sigs.k8s.io": "no-preload-737478",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ecd3d650b18eca76e5e9112152a3f611510afe86c894d4ff96750ad4b561baad",
	            "SandboxKey": "/var/run/docker/netns/ecd3d650b18e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34626"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34627"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34630"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34628"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34629"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-737478": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c30ab0942ebedfa9daed9e159e1243b5098ef936ff9c2403568c9e33b8451ef1",
	                    "EndpointID": "f8a92ccaf8dc9721c76c142f663c191f178822397dec435fa963baf9d95daacb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ca:b5:7b:3a:ef:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-737478",
	                        "7dea84a847e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
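The JSON above is the full docker inspect dump; for connectivity questions the published ports are usually the part that matters. A minimal sketch of pulling a single port mapping out of that JSON with docker's Go-template support, using the container name from this report:

    # Print the host port mapped to the API server port 8443/tcp inside the container
    docker container inspect no-preload-737478 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Against the state captured above this would print 34629.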
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478: exit status 2 (333.48539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-737478 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-737478 logs -n 25: (1.260837441s)
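The harness caps each component's output at 25 lines via -n 25. When triaging locally it can be more convenient to export the complete logs to a file instead; a sketch assuming the same profile, using minikube's --file flag:

    # Write the full logs for the profile to a file instead of stdout
    out/minikube-linux-amd64 -p no-preload-737478 logs --file=no-preload-737478.log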
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ image   │ newest-cni-601829 image list --format=json                                                                                                                                                                                                         │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ pause   │ -p newest-cni-601829 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-382022 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ delete  │ -p newest-cni-601829                                                                                                                                                                                                                               │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p newest-cni-601829                                                                                                                                                                                                                               │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p embed-certs-542273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p auto-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 pgrep -a kubelet                                                                                                                                                                                                                    │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ image   │ no-preload-737478 image list --format=json                                                                                                                                                                                                         │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ pause   │ -p no-preload-737478 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:55:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:55:05.915015 1972864 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:05.915174 1972864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:05.915182 1972864 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:05.915188 1972864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:05.915474 1972864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:55:05.916077 1972864 out.go:368] Setting JSON to false
	I1217 11:55:05.917928 1972864 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20251,"bootTime":1765952255,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:55:05.918012 1972864 start.go:143] virtualization: kvm guest
	I1217 11:55:05.920036 1972864 out.go:179] * [default-k8s-diff-port-382022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:55:05.921753 1972864 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:55:05.921776 1972864 notify.go:221] Checking for updates...
	I1217 11:55:05.924500 1972864 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:05.926029 1972864 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:05.927481 1972864 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:55:05.928660 1972864 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:55:05.930205 1972864 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:55:05.932089 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:05.932942 1972864 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:05.966016 1972864 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:55:05.966214 1972864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:06.051196 1972864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:55:06.035134766 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:06.051362 1972864 docker.go:319] overlay module found
	I1217 11:55:06.053307 1972864 out.go:179] * Using the docker driver based on existing profile
	I1217 11:55:06.055121 1972864 start.go:309] selected driver: docker
	I1217 11:55:06.055187 1972864 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:06.055310 1972864 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:06.056083 1972864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:06.137330 1972864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:55:06.123341974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:06.137759 1972864 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:06.137803 1972864 cni.go:84] Creating CNI manager for ""
	I1217 11:55:06.137872 1972864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:06.137919 1972864 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:05.330562 1968420 node_ready.go:49] node "embed-certs-542273" is "Ready"
	I1217 11:55:05.330602 1968420 node_ready.go:38] duration metric: took 2.307590665s for node "embed-certs-542273" to be "Ready" ...
	I1217 11:55:05.330621 1968420 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:05.330685 1968420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:06.139050 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.109697945s)
	I1217 11:55:06.139117 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.099340031s)
	I1217 11:55:06.139286 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.913743017s)
	I1217 11:55:06.139473 1972864 out.go:179] * Starting "default-k8s-diff-port-382022" primary control-plane node in "default-k8s-diff-port-382022" cluster
	I1217 11:55:06.139346 1968420 api_server.go:72] duration metric: took 3.348259497s to wait for apiserver process to appear ...
	I1217 11:55:06.139489 1968420 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:06.139509 1968420 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:55:06.140802 1968420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-542273 addons enable metrics-server
	
	I1217 11:55:06.140797 1972864 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:55:06.141926 1972864 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:06.144623 1968420 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:55:06.144646 1968420 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
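
The 500 responses above come from post-start hooks that have not finished yet (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes report "failed: reason withheld"); the test simply keeps polling /healthz until it flips to 200, which the log shows happening about half a second later. Below is a minimal Go sketch of that kind of polling loop. The URL comes from the log; the TLS-skip, deadline and retry interval are assumptions for illustration, not minikube's actual api_server.go code.

// healthzwait.go: a hedged sketch of polling the apiserver /healthz endpoint
// until it returns 200, in the spirit of the api_server.go lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.94.2:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// the serving cert is signed by the cluster CA, so a bare client either
		// loads /var/lib/minikube/certs/ca.crt or skips verification (done here
		// purely for illustration)
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
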
	I1217 11:55:06.155204 1968420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 11:55:06.143128 1972864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:06.143168 1972864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:55:06.143181 1972864 cache.go:65] Caching tarball of preloaded images
	I1217 11:55:06.143222 1972864 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:06.143289 1972864 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:55:06.143302 1972864 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:55:06.143455 1972864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:55:06.170086 1972864 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:06.170124 1972864 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:06.170144 1972864 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:06.170183 1972864 start.go:360] acquireMachinesLock for default-k8s-diff-port-382022: {Name:mkc3ede9873fa3c6fdab76bd3c88723bee4b3785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:06.170258 1972864 start.go:364] duration metric: took 50.675µs to acquireMachinesLock for "default-k8s-diff-port-382022"
	I1217 11:55:06.170281 1972864 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:55:06.170291 1972864 fix.go:54] fixHost starting: 
	I1217 11:55:06.170622 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:06.191065 1972864 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382022: state=Stopped err=<nil>
	W1217 11:55:06.191102 1972864 fix.go:138] unexpected machine state, will restart: <nil>
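
At this point fix.go has found the existing "default-k8s-diff-port-382022" container stopped and decides to restart it instead of recreating it. A rough Go sketch of that state probe is below, shelling out to docker the same way the cli_runner lines above do; the branching and error handling are simplified assumptions, not minikube's implementation.

// containerstate.go: a minimal sketch of checking a KIC container's state
// before deciding whether to start or recreate it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the docker state string ("running", "exited", ...)
// for the named container, mirroring the inspect --format call in the log.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-diff-port-382022")
	if err != nil {
		fmt.Println("inspect failed (container may not exist):", err)
		return
	}
	if state == "running" {
		fmt.Println("container already running, nothing to do")
	} else {
		fmt.Println("container is", state, "- would run: docker start default-k8s-diff-port-382022")
	}
}
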
	W1217 11:55:05.010946 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:07.509563 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	I1217 11:55:06.156501 1968420 addons.go:530] duration metric: took 3.364667334s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:55:06.639611 1968420 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:55:06.645152 1968420 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:55:06.646332 1968420 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:06.646360 1968420 api_server.go:131] duration metric: took 506.863143ms to wait for apiserver health ...
	I1217 11:55:06.646370 1968420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:06.653410 1968420 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:06.653442 1968420 system_pods.go:61] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:06.653453 1968420 system_pods.go:61] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:06.653461 1968420 system_pods.go:61] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:55:06.653469 1968420 system_pods.go:61] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:06.653477 1968420 system_pods.go:61] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:06.653484 1968420 system_pods.go:61] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:06.653496 1968420 system_pods.go:61] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:06.653502 1968420 system_pods.go:61] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:06.653514 1968420 system_pods.go:74] duration metric: took 7.137942ms to wait for pod list to return data ...
	I1217 11:55:06.653523 1968420 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:06.656046 1968420 default_sa.go:45] found service account: "default"
	I1217 11:55:06.656064 1968420 default_sa.go:55] duration metric: took 2.535516ms for default service account to be created ...
	I1217 11:55:06.656073 1968420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:06.658845 1968420 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:06.658872 1968420 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:06.658881 1968420 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:06.658890 1968420 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:55:06.658898 1968420 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:06.658912 1968420 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:06.658920 1968420 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:06.658936 1968420 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:06.658943 1968420 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:06.658952 1968420 system_pods.go:126] duration metric: took 2.874094ms to wait for k8s-apps to be running ...
	I1217 11:55:06.658961 1968420 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:06.659011 1968420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:06.674841 1968420 system_svc.go:56] duration metric: took 15.867236ms WaitForService to wait for kubelet
	I1217 11:55:06.674874 1968420 kubeadm.go:587] duration metric: took 3.883790125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:06.674896 1968420 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:06.679469 1968420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:06.679504 1968420 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:06.679524 1968420 node_conditions.go:105] duration metric: took 4.620965ms to run NodePressure ...
	I1217 11:55:06.679551 1968420 start.go:242] waiting for startup goroutines ...
	I1217 11:55:06.679561 1968420 start.go:247] waiting for cluster config update ...
	I1217 11:55:06.679575 1968420 start.go:256] writing updated cluster config ...
	I1217 11:55:06.679934 1968420 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:06.685757 1968420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:06.690580 1968420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 11:55:08.696479 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
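
The pod_ready warnings above are a poll loop waiting for the coredns pod's Ready condition to turn true. A hedged sketch of an equivalent check follows, using kubectl's jsonpath output rather than minikube's client-go based pod_ready.go; the pod name, namespace and 4-minute deadline mirror the log, everything else is illustrative.

// podready.go: a sketch of polling a pod's Ready condition via kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks the cluster (through kubectl and the current kubeconfig)
// whether the named pod's Ready condition is "True".
func podReady(ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", ns, name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // matches the "extra waiting up to 4m0s" above
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "coredns-66bc5c9577-t66bd")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
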
	I1217 11:55:05.221867 1968426 out.go:252]   - Generating certificates and keys ...
	I1217 11:55:05.222013 1968426 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:55:05.222143 1968426 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:55:05.515027 1968426 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:55:05.840693 1968426 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:55:06.051969 1968426 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:55:06.488194 1968426 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:55:07.147959 1968426 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:55:07.148173 1968426 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-213935 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:55:07.452899 1968426 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:55:07.453095 1968426 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-213935 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:55:07.556891 1968426 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:55:07.863151 1968426 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:55:07.920730 1968426 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:55:07.920839 1968426 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:55:08.231818 1968426 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:55:08.551353 1968426 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:55:08.710825 1968426 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:55:08.929825 1968426 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:55:09.189615 1968426 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:55:09.190223 1968426 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:55:09.194170 1968426 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:55:06.193174 1972864 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-382022" ...
	I1217 11:55:06.193264 1972864 cli_runner.go:164] Run: docker start default-k8s-diff-port-382022
	I1217 11:55:06.526174 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:06.552214 1972864 kic.go:430] container "default-k8s-diff-port-382022" state is running.
	I1217 11:55:06.552760 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:06.577698 1972864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:55:06.577964 1972864 machine.go:94] provisionDockerMachine start ...
	I1217 11:55:06.578041 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:06.598700 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:06.599024 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:06.599042 1972864 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:55:06.599663 1972864 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45878->127.0.0.1:34641: read: connection reset by peer
	I1217 11:55:09.755152 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:55:09.755203 1972864 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-382022"
	I1217 11:55:09.755274 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:09.782521 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:09.782860 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:09.782881 1972864 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382022 && echo "default-k8s-diff-port-382022" | sudo tee /etc/hostname
	I1217 11:55:09.951044 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:55:09.951162 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:09.977929 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:09.978252 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:09.978284 1972864 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:55:10.136717 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:55:10.136752 1972864 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:55:10.136778 1972864 ubuntu.go:190] setting up certificates
	I1217 11:55:10.136791 1972864 provision.go:84] configureAuth start
	I1217 11:55:10.136861 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:10.163824 1972864 provision.go:143] copyHostCerts
	I1217 11:55:10.163910 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:55:10.163932 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:55:10.164006 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:55:10.164229 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:55:10.164249 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:55:10.164313 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:55:10.164476 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:55:10.164491 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:55:10.164547 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:55:10.164663 1972864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382022 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-382022 localhost minikube]
	I1217 11:55:10.370953 1972864 provision.go:177] copyRemoteCerts
	I1217 11:55:10.371025 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:55:10.371104 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:10.396183 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:10.499250 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:55:10.523707 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 11:55:10.547625 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:55:10.573466 1972864 provision.go:87] duration metric: took 436.653063ms to configureAuth
	I1217 11:55:10.573501 1972864 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:55:10.573749 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:10.573882 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:10.597313 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:10.597651 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:10.597694 1972864 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:55:11.011460 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:55:11.011491 1972864 machine.go:97] duration metric: took 4.43350855s to provisionDockerMachine
	I1217 11:55:11.011507 1972864 start.go:293] postStartSetup for "default-k8s-diff-port-382022" (driver="docker")
	I1217 11:55:11.011519 1972864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:55:11.011621 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:55:11.011686 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.034079 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.141182 1972864 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:55:11.145913 1972864 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:55:11.145947 1972864 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:55:11.145962 1972864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:55:11.146017 1972864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:55:11.146109 1972864 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:55:11.146199 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:55:11.157064 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:11.177488 1972864 start.go:296] duration metric: took 165.962986ms for postStartSetup
	I1217 11:55:11.177607 1972864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:55:11.177653 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.204846 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.308658 1972864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:55:11.315100 1972864 fix.go:56] duration metric: took 5.144801074s for fixHost
	I1217 11:55:11.315129 1972864 start.go:83] releasing machines lock for "default-k8s-diff-port-382022", held for 5.144858234s
	I1217 11:55:11.315199 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:11.338745 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:55:11.338818 1972864 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:55:11.338829 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:55:11.338879 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:55:11.338917 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:55:11.338953 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:55:11.339012 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:11.339099 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:55:11.339164 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.365430 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.495096 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:55:11.525235 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:55:11.552785 1972864 ssh_runner.go:195] Run: openssl version
	I1217 11:55:11.562436 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.573636 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:55:11.584676 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.589300 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.589361 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.641338 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:55:11.652096 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.661866 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:55:11.674821 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.680734 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.680803 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.729519 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:55:11.740678 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.750972 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:55:11.760660 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.766240 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.766330 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.818447 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
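
The openssl/ln/test sequence above installs each CA into the system trust store by linking it as /etc/ssl/certs/<subject-hash>.0 (for example the b5213941.0 link checked above). The following Go sketch reproduces that step for one certificate; the paths come from the log, and the rest is an illustrative assumption (it needs root to write under /etc/ssl/certs).

// certhashlink.go: a sketch of computing the OpenSSL subject hash for a CA
// and creating the /etc/ssl/certs/<hash>.0 symlink, as the log does with
// "openssl x509 -hash" plus "ln -fs".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// replicate "ln -fs": remove any stale link before creating the new one
	os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}
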
	I1217 11:55:11.829562 1972864 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:55:11.834749 1972864 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 11:55:11.840877 1972864 ssh_runner.go:195] Run: cat /version.json
	I1217 11:55:11.840989 1972864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:55:11.929334 1972864 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:11.937487 1972864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:55:11.993284 1972864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:55:11.999774 1972864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:55:11.999914 1972864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:55:12.011051 1972864 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:55:12.011078 1972864 start.go:496] detecting cgroup driver to use...
	I1217 11:55:12.011113 1972864 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:55:12.011160 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:55:12.031108 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:55:12.049252 1972864 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:55:12.049318 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:55:12.069726 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:55:12.085078 1972864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:55:12.209963 1972864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:55:12.316465 1972864 docker.go:234] disabling docker service ...
	I1217 11:55:12.316548 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:55:12.333455 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:55:12.348978 1972864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:55:12.457995 1972864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:55:12.596548 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:55:12.612573 1972864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:55:12.628307 1972864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:55:12.628394 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.646387 1972864 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:55:12.646626 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.694485 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.704956 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.714913 1972864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:55:12.723885 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.734227 1972864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.746830 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.760588 1972864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:55:12.773376 1972864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:55:12.783526 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:12.888361 1972864 ssh_runner.go:195] Run: sudo systemctl restart crio
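
The sed one-liners above pin the pause image and switch CRI-O to the systemd cgroup manager before crio is restarted. A small Go sketch of the same in-place config rewrite is below; it uses the drop-in path and values shown in the log, but it is a regex-based illustration, not minikube's crio.go.

// crioconf.go: a sketch of rewriting the CRI-O drop-in to force pause_image
// and cgroup_manager, equivalent in effect to the sed commands above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// replace whatever pause_image / cgroup_manager lines exist with the values from the log
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", conf, "- apply with: sudo systemctl restart crio")
}
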
	I1217 11:55:13.205918 1972864 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:55:13.205985 1972864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:55:13.210225 1972864 start.go:564] Will wait 60s for crictl version
	I1217 11:55:13.210287 1972864 ssh_runner.go:195] Run: which crictl
	I1217 11:55:13.214055 1972864 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:55:13.241923 1972864 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:55:13.242004 1972864 ssh_runner.go:195] Run: crio --version
	I1217 11:55:13.272236 1972864 ssh_runner.go:195] Run: crio --version
	I1217 11:55:13.311001 1972864 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1217 11:55:09.509773 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:11.511479 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:11.202390 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:13.702094 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:09.198065 1968426 out.go:252]   - Booting up control plane ...
	I1217 11:55:09.198177 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:55:09.198291 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:55:09.198350 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:55:09.212059 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:55:09.212187 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:55:09.221646 1968426 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:55:09.222066 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:55:09.222112 1968426 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:55:09.330911 1968426 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:55:09.331064 1968426 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:55:10.335948 1968426 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0019159s
	I1217 11:55:10.337319 1968426 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:55:10.337604 1968426 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 11:55:10.337743 1968426 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:55:10.337845 1968426 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:55:12.778929 1968426 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.441400896s
	I1217 11:55:12.814618 1968426 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.476747339s
	I1217 11:55:13.313210 1972864 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-382022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:55:13.337041 1972864 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:55:13.342732 1972864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:55:13.357159 1972864 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:55:13.357335 1972864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:13.357405 1972864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:55:13.401031 1972864 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:55:13.401061 1972864 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:55:13.401124 1972864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:55:13.435767 1972864 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:55:13.435795 1972864 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:55:13.435805 1972864 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.3 crio true true} ...
	I1217 11:55:13.435950 1972864 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-382022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
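
The kubelet unit fragment above is rendered from the node's name and IP. A hypothetical text/template rendering of a drop-in like it is sketched below; the template body is deliberately trimmed and illustrative, and only the kubelet path, hostname-override and node-ip values are taken from the log.

// kubeletunit.go: a sketch of templating a kubelet systemd drop-in from node values.
package main

import (
	"os"
	"text/template"
)

// unit is a trimmed, illustrative drop-in shape; not minikube's exact template.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.3/kubelet",
		"NodeName":    "default-k8s-diff-port-382022",
		"NodeIP":      "192.168.76.2",
	})
}
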
	I1217 11:55:13.436036 1972864 ssh_runner.go:195] Run: crio config
	I1217 11:55:13.501779 1972864 cni.go:84] Creating CNI manager for ""
	I1217 11:55:13.501805 1972864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:13.501824 1972864 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:55:13.501855 1972864 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382022 NodeName:default-k8s-diff-port-382022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:55:13.502039 1972864 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-382022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:55:13.502129 1972864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
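
The kubeadm config printed above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied onto the node. The sketch below shows one way to sanity-check such a file by decoding each document and printing its apiVersion/kind; the local path is a hypothetical copy and gopkg.in/yaml.v3 is an assumed dependency, not something minikube does at this step.

// kubeadmcfgcheck.go: a sketch that walks the multi-document kubeadm YAML
// and reports each document's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// hypothetical local copy of the config shown in the log
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
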
	I1217 11:55:13.513932 1972864 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:55:13.514003 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:55:13.524819 1972864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 11:55:13.541119 1972864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:55:13.557811 1972864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1217 11:55:13.576185 1972864 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:55:13.581146 1972864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
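
Both /etc/hosts edits in this pass (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent pattern: drop any stale line for the name, then append the fresh mapping. A Go sketch of that rewrite is below; it operates on the local /etc/hosts for illustration rather than over SSH, and the IP/name pair is the one from the log.

// hostsentry.go: a sketch of the idempotent /etc/hosts rewrite done by the
// grep/echo one-liners above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry returns the hosts content with exactly one tab-separated
// line mapping name to ip, preserving every unrelated line.
func ensureHostsEntry(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(strings.TrimRight(string(data), "\n"),
		"192.168.76.2", "control-plane.minikube.internal"))
}
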
	I1217 11:55:13.594763 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:13.713132 1972864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:13.740075 1972864 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022 for IP: 192.168.76.2
	I1217 11:55:13.740104 1972864 certs.go:195] generating shared ca certs ...
	I1217 11:55:13.740126 1972864 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:13.740330 1972864 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:55:13.740393 1972864 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:55:13.740406 1972864 certs.go:257] generating profile certs ...
	I1217 11:55:13.740497 1972864 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key
	I1217 11:55:13.740635 1972864 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a
	I1217 11:55:13.740721 1972864 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key
	I1217 11:55:13.740846 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:55:13.740880 1972864 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:55:13.740887 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:55:13.740911 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:55:13.740934 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:55:13.740955 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:55:13.740993 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:13.741867 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:55:13.773747 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:55:13.804586 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:55:13.834707 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:55:13.869625 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 11:55:13.898355 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:55:13.922845 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:55:13.947273 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:55:13.972061 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:55:14.001446 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:55:14.027589 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:55:14.054132 1972864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:55:14.073337 1972864 ssh_runner.go:195] Run: openssl version
	I1217 11:55:14.082156 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.092983 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:55:14.103945 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.109736 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.109811 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.172322 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:55:14.182999 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.194351 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:55:14.210464 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.216214 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.216730 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.276197 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:55:14.287490 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.299145 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:55:14.312656 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.319064 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.319132 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.383567 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:55:14.400321 1972864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:55:14.410392 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:55:14.479493 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:55:14.544631 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:55:14.604881 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:55:14.664836 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:55:14.723985 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 11:55:14.789590 1972864 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:14.789736 1972864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:55:14.789811 1972864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:55:14.835987 1972864 cri.go:89] found id: "8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910"
	I1217 11:55:14.836014 1972864 cri.go:89] found id: "b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7"
	I1217 11:55:14.836031 1972864 cri.go:89] found id: "7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0"
	I1217 11:55:14.836036 1972864 cri.go:89] found id: "6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466"
	I1217 11:55:14.836040 1972864 cri.go:89] found id: ""
	I1217 11:55:14.836091 1972864 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 11:55:14.856976 1972864 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:14Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:14.857081 1972864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:55:14.871120 1972864 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:55:14.871143 1972864 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:55:14.871281 1972864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:55:14.883106 1972864 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:55:14.884356 1972864 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382022" does not appear in /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:14.885194 1972864 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-1669348/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382022" cluster setting kubeconfig missing "default-k8s-diff-port-382022" context setting]
	I1217 11:55:14.886458 1972864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.889149 1972864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:55:14.903999 1972864 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 11:55:14.904040 1972864 kubeadm.go:602] duration metric: took 32.890057ms to restartPrimaryControlPlane
	I1217 11:55:14.904052 1972864 kubeadm.go:403] duration metric: took 114.480546ms to StartCluster
	I1217 11:55:14.904073 1972864 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.904147 1972864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:14.906555 1972864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.907164 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:14.907249 1972864 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:14.907415 1972864 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:55:14.907508 1972864 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.907527 1972864 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.907570 1972864 addons.go:248] addon storage-provisioner should already be in state true
	I1217 11:55:14.907602 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.908114 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.908339 1972864 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.908366 1972864 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.908413 1972864 addons.go:248] addon dashboard should already be in state true
	I1217 11:55:14.908464 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.908988 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.909258 1972864 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.909280 1972864 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382022"
	I1217 11:55:14.909613 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.912097 1972864 out.go:179] * Verifying Kubernetes components...
	I1217 11:55:14.838768 1968426 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501405436s
	I1217 11:55:14.866719 1968426 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:55:14.879181 1968426 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:55:14.897113 1968426 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:55:14.897628 1968426 kubeadm.go:319] [mark-control-plane] Marking the node auto-213935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:55:14.933522 1968426 kubeadm.go:319] [bootstrap-token] Using token: xj4v1d.49m4e5gs1ckj0agu
	I1217 11:55:14.914078 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:14.941643 1972864 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 11:55:14.941685 1972864 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:55:14.943971 1972864 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:14.943992 1972864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:55:14.944059 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.944144 1972864 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 11:55:14.944150 1972864 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.944168 1972864 addons.go:248] addon default-storageclass should already be in state true
	I1217 11:55:14.944231 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.944732 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.945181 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 11:55:14.945206 1972864 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 11:55:14.945256 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.985758 1972864 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:14.985961 1972864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:55:14.986156 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.989510 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:14.991637 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:15.024797 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:15.126636 1972864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:15.136146 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 11:55:15.136172 1972864 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 11:55:15.137704 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:15.145951 1972864 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:55:15.164090 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 11:55:15.164119 1972864 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 11:55:15.173306 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:15.187105 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 11:55:15.187135 1972864 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 11:55:15.211988 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 11:55:15.212013 1972864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 11:55:15.235388 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 11:55:15.235420 1972864 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 11:55:15.261315 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 11:55:15.261346 1972864 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 11:55:15.282989 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 11:55:15.283043 1972864 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 11:55:15.310716 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 11:55:15.310761 1972864 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 11:55:15.336860 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:55:15.336898 1972864 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 11:55:15.359900 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:55:14.936461 1968426 out.go:252]   - Configuring RBAC rules ...
	I1217 11:55:14.936651 1968426 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:55:14.943579 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:55:14.957528 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:55:14.965463 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:55:14.973724 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:55:14.980763 1968426 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:55:15.247181 1968426 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:55:15.680248 1968426 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:55:16.249887 1968426 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:55:16.251461 1968426 kubeadm.go:319] 
	I1217 11:55:16.252502 1968426 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:55:16.252519 1968426 kubeadm.go:319] 
	I1217 11:55:16.252621 1968426 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:55:16.252626 1968426 kubeadm.go:319] 
	I1217 11:55:16.252655 1968426 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:55:16.252727 1968426 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:55:16.252784 1968426 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:55:16.252789 1968426 kubeadm.go:319] 
	I1217 11:55:16.252861 1968426 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:55:16.252866 1968426 kubeadm.go:319] 
	I1217 11:55:16.252924 1968426 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:55:16.252929 1968426 kubeadm.go:319] 
	I1217 11:55:16.252987 1968426 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:55:16.253073 1968426 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:55:16.253152 1968426 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:55:16.253156 1968426 kubeadm.go:319] 
	I1217 11:55:16.253254 1968426 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:55:16.253341 1968426 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:55:16.253346 1968426 kubeadm.go:319] 
	I1217 11:55:16.253707 1968426 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xj4v1d.49m4e5gs1ckj0agu \
	I1217 11:55:16.253911 1968426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:55:16.253969 1968426 kubeadm.go:319] 	--control-plane 
	I1217 11:55:16.253985 1968426 kubeadm.go:319] 
	I1217 11:55:16.254174 1968426 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:55:16.254236 1968426 kubeadm.go:319] 
	I1217 11:55:16.254382 1968426 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xj4v1d.49m4e5gs1ckj0agu \
	I1217 11:55:16.254589 1968426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:55:16.256944 1968426 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:55:16.257103 1968426 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:55:16.257146 1968426 cni.go:84] Creating CNI manager for ""
	I1217 11:55:16.257166 1968426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:16.261095 1968426 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:55:16.649970 1972864 node_ready.go:49] node "default-k8s-diff-port-382022" is "Ready"
	I1217 11:55:16.650012 1972864 node_ready.go:38] duration metric: took 1.504027618s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:55:16.650047 1972864 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:16.650124 1972864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:17.435218 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.297480989s)
	I1217 11:55:17.435331 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261978145s)
	I1217 11:55:17.435439 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.075506487s)
	I1217 11:55:17.435502 1972864 api_server.go:72] duration metric: took 2.528210759s to wait for apiserver process to appear ...
	I1217 11:55:17.435674 1972864 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:17.435729 1972864 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:55:17.437001 1972864 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382022 addons enable metrics-server
	
	I1217 11:55:17.440773 1972864 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:55:17.440809 1972864 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 11:55:17.443120 1972864 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 11:55:14.010760 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:16.511895 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:15.703398 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:18.196308 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:16.262819 1968426 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:55:16.269037 1968426 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:55:16.269068 1968426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:55:16.288816 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:55:16.641451 1968426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:55:16.641740 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:16.641765 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-213935 minikube.k8s.io/updated_at=2025_12_17T11_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=auto-213935 minikube.k8s.io/primary=true
	I1217 11:55:16.666162 1968426 ops.go:34] apiserver oom_adj: -16
	I1217 11:55:16.779798 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:17.280793 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:17.780773 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:18.280818 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:18.780724 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:19.280002 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:19.780707 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.279886 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.780546 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.849820 1968426 kubeadm.go:1114] duration metric: took 4.208149599s to wait for elevateKubeSystemPrivileges
	I1217 11:55:20.849881 1968426 kubeadm.go:403] duration metric: took 16.084919874s to StartCluster
	I1217 11:55:20.849907 1968426 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:20.849987 1968426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:20.852845 1968426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:20.853190 1968426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:55:20.853184 1968426 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:20.853505 1968426 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:55:20.853715 1968426 addons.go:70] Setting storage-provisioner=true in profile "auto-213935"
	I1217 11:55:20.853737 1968426 addons.go:239] Setting addon storage-provisioner=true in "auto-213935"
	I1217 11:55:20.853821 1968426 addons.go:70] Setting default-storageclass=true in profile "auto-213935"
	I1217 11:55:20.853834 1968426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-213935"
	I1217 11:55:20.853900 1968426 host.go:66] Checking if "auto-213935" exists ...
	I1217 11:55:20.854788 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.854908 1968426 config.go:182] Loaded profile config "auto-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:20.855253 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.856383 1968426 out.go:179] * Verifying Kubernetes components...
	I1217 11:55:20.859959 1968426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:20.885192 1968426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:55:17.444238 1972864 addons.go:530] duration metric: took 2.53682973s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:55:17.936751 1972864 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:55:17.941896 1972864 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 11:55:17.943002 1972864 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:17.943033 1972864 api_server.go:131] duration metric: took 507.348152ms to wait for apiserver health ...
	I1217 11:55:17.943044 1972864 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:17.946080 1972864 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:17.946129 1972864 system_pods.go:61] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:17.946148 1972864 system_pods.go:61] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:17.946156 1972864 system_pods.go:61] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:55:17.946161 1972864 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:17.946167 1972864 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:17.946178 1972864 system_pods.go:61] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:17.946186 1972864 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:17.946195 1972864 system_pods.go:61] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Running
	I1217 11:55:17.946206 1972864 system_pods.go:74] duration metric: took 3.151523ms to wait for pod list to return data ...
	I1217 11:55:17.946218 1972864 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:17.948580 1972864 default_sa.go:45] found service account: "default"
	I1217 11:55:17.948602 1972864 default_sa.go:55] duration metric: took 2.373002ms for default service account to be created ...
	I1217 11:55:17.948613 1972864 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:17.951056 1972864 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:17.951085 1972864 system_pods.go:89] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:17.951094 1972864 system_pods.go:89] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:17.951103 1972864 system_pods.go:89] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:55:17.951109 1972864 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:17.951118 1972864 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:17.951124 1972864 system_pods.go:89] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:17.951132 1972864 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:17.951136 1972864 system_pods.go:89] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Running
	I1217 11:55:17.951143 1972864 system_pods.go:126] duration metric: took 2.523832ms to wait for k8s-apps to be running ...
	I1217 11:55:17.951158 1972864 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:17.951204 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:17.978730 1972864 system_svc.go:56] duration metric: took 27.563396ms WaitForService to wait for kubelet
	I1217 11:55:17.978769 1972864 kubeadm.go:587] duration metric: took 3.071477819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:17.978809 1972864 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:17.981772 1972864 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:17.981811 1972864 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:17.981831 1972864 node_conditions.go:105] duration metric: took 3.01527ms to run NodePressure ...
	I1217 11:55:17.981847 1972864 start.go:242] waiting for startup goroutines ...
	I1217 11:55:17.981857 1972864 start.go:247] waiting for cluster config update ...
	I1217 11:55:17.981873 1972864 start.go:256] writing updated cluster config ...
	I1217 11:55:17.982150 1972864 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:17.986348 1972864 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:17.990447 1972864 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nz5c" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 11:55:19.996696 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:20.887329 1968426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:20.887356 1968426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:55:20.887434 1968426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-213935
	I1217 11:55:20.887839 1968426 addons.go:239] Setting addon default-storageclass=true in "auto-213935"
	I1217 11:55:20.887900 1968426 host.go:66] Checking if "auto-213935" exists ...
	I1217 11:55:20.888916 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.915669 1968426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34636 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/auto-213935/id_rsa Username:docker}
	I1217 11:55:20.916618 1968426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:20.916642 1968426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:55:20.916694 1968426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-213935
	I1217 11:55:20.949341 1968426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34636 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/auto-213935/id_rsa Username:docker}
	I1217 11:55:20.973408 1968426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:55:21.038628 1968426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:21.042668 1968426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:21.071913 1968426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:21.172319 1968426 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 11:55:21.173658 1968426 node_ready.go:35] waiting up to 15m0s for node "auto-213935" to be "Ready" ...
	I1217 11:55:21.361677 1968426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 11:55:19.008895 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:21.011708 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:23.012487 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:20.197307 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:22.697565 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:21.363154 1968426 addons.go:530] duration metric: took 509.647318ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:55:21.676506 1968426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-213935" context rescaled to 1 replicas
	W1217 11:55:23.177425 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:21.997728 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:24.498078 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:25.510927 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:28.009327 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:25.198433 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:27.696164 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:25.178008 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:27.677094 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:26.997192 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:29.496072 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:30.009958 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:32.508570 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:30.196632 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:32.696852 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:29.677249 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:32.177004 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	I1217 11:55:33.510207 1963245 pod_ready.go:94] pod "coredns-7d764666f9-n2kvr" is "Ready"
	I1217 11:55:33.510240 1963245 pod_ready.go:86] duration metric: took 39.006915253s for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.513226 1963245 pod_ready.go:83] waiting for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.518013 1963245 pod_ready.go:94] pod "etcd-no-preload-737478" is "Ready"
	I1217 11:55:33.518042 1963245 pod_ready.go:86] duration metric: took 4.791962ms for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.520439 1963245 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.525019 1963245 pod_ready.go:94] pod "kube-apiserver-no-preload-737478" is "Ready"
	I1217 11:55:33.525042 1963245 pod_ready.go:86] duration metric: took 4.576574ms for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.527093 1963245 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.707246 1963245 pod_ready.go:94] pod "kube-controller-manager-no-preload-737478" is "Ready"
	I1217 11:55:33.707289 1963245 pod_ready.go:86] duration metric: took 180.171414ms for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.908251 1963245 pod_ready.go:83] waiting for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.308267 1963245 pod_ready.go:94] pod "kube-proxy-5tkm8" is "Ready"
	I1217 11:55:34.308294 1963245 pod_ready.go:86] duration metric: took 400.014798ms for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.508400 1963245 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.907605 1963245 pod_ready.go:94] pod "kube-scheduler-no-preload-737478" is "Ready"
	I1217 11:55:34.907635 1963245 pod_ready.go:86] duration metric: took 399.204157ms for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.907651 1963245 pod_ready.go:40] duration metric: took 40.409789961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:34.953713 1963245 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:55:34.955479 1963245 out.go:179] * Done! kubectl is now configured to use "no-preload-737478" cluster and "default" namespace by default
	W1217 11:55:31.496442 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:33.497214 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:34.176846 1968426 node_ready.go:49] node "auto-213935" is "Ready"
	I1217 11:55:34.176882 1968426 node_ready.go:38] duration metric: took 13.003193123s for node "auto-213935" to be "Ready" ...
	I1217 11:55:34.176901 1968426 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:34.176959 1968426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:34.189526 1968426 api_server.go:72] duration metric: took 13.336301645s to wait for apiserver process to appear ...
	I1217 11:55:34.189581 1968426 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:34.189603 1968426 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:55:34.194353 1968426 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:55:34.195733 1968426 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:34.195766 1968426 api_server.go:131] duration metric: took 6.176107ms to wait for apiserver health ...
	I1217 11:55:34.195777 1968426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:34.199311 1968426 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:34.199371 1968426 system_pods.go:61] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.199377 1968426 system_pods.go:61] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.199384 1968426 system_pods.go:61] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.199388 1968426 system_pods.go:61] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.199392 1968426 system_pods.go:61] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.199400 1968426 system_pods.go:61] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.199403 1968426 system_pods.go:61] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.199408 1968426 system_pods.go:61] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.199415 1968426 system_pods.go:74] duration metric: took 3.630946ms to wait for pod list to return data ...
	I1217 11:55:34.199426 1968426 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:34.202301 1968426 default_sa.go:45] found service account: "default"
	I1217 11:55:34.202324 1968426 default_sa.go:55] duration metric: took 2.891984ms for default service account to be created ...
	I1217 11:55:34.202343 1968426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:34.205662 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.205701 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.205708 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.205715 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.205721 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.205725 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.205729 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.205733 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.205738 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.205764 1968426 retry.go:31] will retry after 248.175498ms: missing components: kube-dns
	I1217 11:55:34.457746 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.457785 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.457794 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.457802 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.457806 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.457812 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.457817 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.457823 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.457830 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.457859 1968426 retry.go:31] will retry after 326.462384ms: missing components: kube-dns
	I1217 11:55:34.789392 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.789420 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Running
	I1217 11:55:34.789426 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.789429 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.789433 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.789438 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.789445 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.789450 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.789454 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Running
	I1217 11:55:34.789464 1968426 system_pods.go:126] duration metric: took 587.114184ms to wait for k8s-apps to be running ...
	I1217 11:55:34.789478 1968426 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:34.789560 1968426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:34.803259 1968426 system_svc.go:56] duration metric: took 13.763269ms WaitForService to wait for kubelet
	I1217 11:55:34.803301 1968426 kubeadm.go:587] duration metric: took 13.950081466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:34.803337 1968426 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:34.806420 1968426 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:34.806448 1968426 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:34.806467 1968426 node_conditions.go:105] duration metric: took 3.124048ms to run NodePressure ...
	I1217 11:55:34.806479 1968426 start.go:242] waiting for startup goroutines ...
	I1217 11:55:34.806487 1968426 start.go:247] waiting for cluster config update ...
	I1217 11:55:34.806497 1968426 start.go:256] writing updated cluster config ...
	I1217 11:55:34.806796 1968426 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:34.811172 1968426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:34.815270 1968426 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r2wht" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.821953 1968426 pod_ready.go:94] pod "coredns-66bc5c9577-r2wht" is "Ready"
	I1217 11:55:34.821976 1968426 pod_ready.go:86] duration metric: took 6.6841ms for pod "coredns-66bc5c9577-r2wht" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.824088 1968426 pod_ready.go:83] waiting for pod "etcd-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.827935 1968426 pod_ready.go:94] pod "etcd-auto-213935" is "Ready"
	I1217 11:55:34.827960 1968426 pod_ready.go:86] duration metric: took 3.852249ms for pod "etcd-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.829811 1968426 pod_ready.go:83] waiting for pod "kube-apiserver-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.833423 1968426 pod_ready.go:94] pod "kube-apiserver-auto-213935" is "Ready"
	I1217 11:55:34.833444 1968426 pod_ready.go:86] duration metric: took 3.613745ms for pod "kube-apiserver-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.835327 1968426 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.215776 1968426 pod_ready.go:94] pod "kube-controller-manager-auto-213935" is "Ready"
	I1217 11:55:35.215807 1968426 pod_ready.go:86] duration metric: took 380.458261ms for pod "kube-controller-manager-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.416163 1968426 pod_ready.go:83] waiting for pod "kube-proxy-54kwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.816672 1968426 pod_ready.go:94] pod "kube-proxy-54kwh" is "Ready"
	I1217 11:55:35.816705 1968426 pod_ready.go:86] duration metric: took 400.512173ms for pod "kube-proxy-54kwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.015880 1968426 pod_ready.go:83] waiting for pod "kube-scheduler-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.416249 1968426 pod_ready.go:94] pod "kube-scheduler-auto-213935" is "Ready"
	I1217 11:55:36.416277 1968426 pod_ready.go:86] duration metric: took 400.367181ms for pod "kube-scheduler-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.416289 1968426 pod_ready.go:40] duration metric: took 1.605081184s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:36.462404 1968426 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:55:36.464325 1968426 out.go:179] * Done! kubectl is now configured to use "auto-213935" cluster and "default" namespace by default
	W1217 11:55:34.697503 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:37.196552 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:38.197165 1968420 pod_ready.go:94] pod "coredns-66bc5c9577-t66bd" is "Ready"
	I1217 11:55:38.197201 1968420 pod_ready.go:86] duration metric: took 31.506592273s for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.199718 1968420 pod_ready.go:83] waiting for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.204687 1968420 pod_ready.go:94] pod "etcd-embed-certs-542273" is "Ready"
	I1217 11:55:38.204716 1968420 pod_ready.go:86] duration metric: took 4.969846ms for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.207346 1968420 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.212242 1968420 pod_ready.go:94] pod "kube-apiserver-embed-certs-542273" is "Ready"
	I1217 11:55:38.212273 1968420 pod_ready.go:86] duration metric: took 4.899712ms for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.214736 1968420 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.395360 1968420 pod_ready.go:94] pod "kube-controller-manager-embed-certs-542273" is "Ready"
	I1217 11:55:38.395391 1968420 pod_ready.go:86] duration metric: took 180.631954ms for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.595230 1968420 pod_ready.go:83] waiting for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.995218 1968420 pod_ready.go:94] pod "kube-proxy-gfbw9" is "Ready"
	I1217 11:55:38.995250 1968420 pod_ready.go:86] duration metric: took 399.986048ms for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.194526 1968420 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.595252 1968420 pod_ready.go:94] pod "kube-scheduler-embed-certs-542273" is "Ready"
	I1217 11:55:39.595285 1968420 pod_ready.go:86] duration metric: took 400.717588ms for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.595302 1968420 pod_ready.go:40] duration metric: took 32.909508699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:39.642757 1968420 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:55:39.644690 1968420 out.go:179] * Done! kubectl is now configured to use "embed-certs-542273" cluster and "default" namespace by default
	W1217 11:55:35.996074 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:38.496938 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:40.996388 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:43.496698 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 11:55:12 no-preload-737478 crio[605]: time="2025-12-17T11:55:12.425833121Z" level=info msg="Started container" PID=1771 containerID=c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper id=b03e6b26-fcd5-417c-b1e6-3fc425a0371b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b09b5e7103ba9b519897af8233d257b7e46be83c208e3da9adc285dc9cfd4d9
	Dec 17 11:55:12 no-preload-737478 crio[605]: time="2025-12-17T11:55:12.506320657Z" level=info msg="Removing container: d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5" id=d2219b65-6c82-4f8e-b002-37332e32fcc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:12 no-preload-737478 crio[605]: time="2025-12-17T11:55:12.5213633Z" level=info msg="Removed container d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=d2219b65-6c82-4f8e-b002-37332e32fcc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.537777982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8189faca-ab32-403b-b436-7e6d42b41624 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.538837649Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20991cb0-cf3a-4dc4-b73b-d258ab4a337b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.540250202Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8c6c9514-6581-444d-b33a-6e2ecac5f8bf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.540654363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.546414641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.546630995Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f63d09b051e7d9ed8dd6e0f97d817a1e897c67202815cf5c6427add304af439/merged/etc/passwd: no such file or directory"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.546681767Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f63d09b051e7d9ed8dd6e0f97d817a1e897c67202815cf5c6427add304af439/merged/etc/group: no such file or directory"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.547746043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.582053187Z" level=info msg="Created container 1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4: kube-system/storage-provisioner/storage-provisioner" id=8c6c9514-6581-444d-b33a-6e2ecac5f8bf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.582778477Z" level=info msg="Starting container: 1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4" id=1ca3b4d3-b8fd-4801-9932-3b2cacf7df98 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:24 no-preload-737478 crio[605]: time="2025-12-17T11:55:24.585331355Z" level=info msg="Started container" PID=1785 containerID=1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4 description=kube-system/storage-provisioner/storage-provisioner id=1ca3b4d3-b8fd-4801-9932-3b2cacf7df98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0871373db009f4d1d2392604362696bee47f86dff279ecedb95529e5102f9b3
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.374802373Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98f30f8c-9d6c-417c-beb1-93ce61934326 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.375912247Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9d75f1a2-5153-4c4d-8702-4f9b1387acfd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.377053825Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=ba994f5e-0023-4637-aed7-9cde3e6142ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.377224198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.383387541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.384039946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.422976594Z" level=info msg="Created container 0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=ba994f5e-0023-4637-aed7-9cde3e6142ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.423692112Z" level=info msg="Starting container: 0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee" id=77ae6bcd-08f6-43a5-a8d7-bec8111958bc name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.425652471Z" level=info msg="Started container" PID=1819 containerID=0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper id=77ae6bcd-08f6-43a5-a8d7-bec8111958bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b09b5e7103ba9b519897af8233d257b7e46be83c208e3da9adc285dc9cfd4d9
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.578470941Z" level=info msg="Removing container: c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372" id=df0a1d84-8710-4655-8638-edfdcd536d36 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:37 no-preload-737478 crio[605]: time="2025-12-17T11:55:37.596696417Z" level=info msg="Removed container c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4/dashboard-metrics-scraper" id=df0a1d84-8710-4655-8638-edfdcd536d36 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0e25eb79c6bd9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   1b09b5e7103ba       dashboard-metrics-scraper-867fb5f87b-lzxn4   kubernetes-dashboard
	1e8a997b4b341       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   e0871373db009       storage-provisioner                          kube-system
	fd3aeebcdab23       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago       Running             kubernetes-dashboard        0                   abb45e930b43a       kubernetes-dashboard-b84665fb8-t9pxx         kubernetes-dashboard
	8c0fc19eb2c75       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     0                   93150660521ec       coredns-7d764666f9-n2kvr                     kube-system
	9ad911b33c8d7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   a530b9d4febd3       busybox                                      default
	e366a6880a703       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   e0871373db009       storage-provisioner                          kube-system
	c3857941ca2aa       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   dd810bab5711c       kindnet-fnspp                                kube-system
	1d2b1bb8a1b76       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           57 seconds ago       Running             kube-proxy                  0                   1961f766473ef       kube-proxy-5tkm8                             kube-system
	dfa862cc6c124       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   4ac0cb416820f       etcd-no-preload-737478                       kube-system
	8e9c526071331       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           About a minute ago   Running             kube-apiserver              0                   94b32c89adeba       kube-apiserver-no-preload-737478             kube-system
	59ffeef8ed703       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           About a minute ago   Running             kube-scheduler              0                   52322dcee8c2a       kube-scheduler-no-preload-737478             kube-system
	2927eecd91f4b       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           About a minute ago   Running             kube-controller-manager     0                   a76158e34139f       kube-controller-manager-no-preload-737478    kube-system
	
	
	==> coredns [8c0fc19eb2c75cce822364a4b57ad3d996f36be504d773da1f4a3833e438910b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44889 - 24894 "HINFO IN 8960180135453485830.4829056896470198371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030488511s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-737478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-737478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=no-preload-737478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_53_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:53:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-737478
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:55:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:53:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:55:34 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-737478
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                247c8806-279e-4c7a-81b2-36bc1da2ec08
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-n2kvr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     115s
	  kube-system                 etcd-no-preload-737478                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-fnspp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-737478              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-737478     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-5tkm8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-737478              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-lzxn4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-t9pxx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  117s  node-controller  Node no-preload-737478 event: Registered Node no-preload-737478 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-737478 event: Registered Node no-preload-737478 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [dfa862cc6c124cbff58725fd6b60cb1a8b9eefcaf56e3fc283931533b497b6f9] <==
	{"level":"info","ts":"2025-12-17T11:54:50.949171Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T11:54:50.949229Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T11:54:50.949556Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:50.949580Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T11:54:50.949915Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-17T11:54:50.949997Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T11:54:50.950075Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T11:54:51.938425Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:51.938523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:51.938622Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T11:54:51.938649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:51.938666Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.939582Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.939620Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T11:54:51.939641Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.939649Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T11:54:51.941013Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:51.941031Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T11:54:51.941011Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-737478 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T11:54:51.941299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:51.941317Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T11:54:51.943337Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:51.943406Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T11:54:51.945580Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T11:54:51.945580Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 11:55:51 up  5:38,  0 user,  load average: 5.37, 4.13, 2.59
	Linux no-preload-737478 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c3857941ca2aae221674eea310f456831e8d058f682132b671e62d0c96c1fc17] <==
	I1217 11:54:54.015212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:54:54.015580       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 11:54:54.015779       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:54:54.015802       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:54:54.015824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:54:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:54:54.218812       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:54:54.219068       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:54:54.219101       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:54:54.219219       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:54:54.710964       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:54:54.711012       1 metrics.go:72] Registering metrics
	I1217 11:54:54.711086       1 controller.go:711] "Syncing nftables rules"
	I1217 11:55:04.218707       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:04.218821       1 main.go:301] handling current node
	I1217 11:55:14.218675       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:14.218718       1 main.go:301] handling current node
	I1217 11:55:24.218275       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:24.218321       1 main.go:301] handling current node
	I1217 11:55:34.217714       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:34.217758       1 main.go:301] handling current node
	I1217 11:55:44.221046       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 11:55:44.221084       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e9c5260713310721b633c55bf538fd5250281666a4f79e7afb0e39f48e8752a] <==
	I1217 11:54:52.918886       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 11:54:52.918892       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 11:54:52.919600       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 11:54:52.919984       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 11:54:52.920059       1 aggregator.go:187] initial CRD sync complete...
	I1217 11:54:52.920091       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:54:52.920130       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:54:52.920156       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:54:52.925711       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 11:54:52.926520       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 11:54:52.966585       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:52.966611       1 policy_source.go:248] refreshing policies
	I1217 11:54:52.977558       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:54:52.978080       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:54:53.205837       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:54:53.238525       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:54:53.262133       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:54:53.272889       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:54:53.282355       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:54:53.339806       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.104.102"}
	I1217 11:54:53.355795       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.95.82"}
	I1217 11:54:53.823323       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 11:54:56.549515       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:54:56.600566       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:54:56.700081       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2927eecd91f4b36c104d665f79cbb47dbc7e16d7f360c6a4e4e977b70d7eaf43] <==
	I1217 11:54:56.053030       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.053052       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.056677       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.058905       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:56.059803       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060061       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060123       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060190       1 range_allocator.go:177] "Sending events to api server"
	I1217 11:54:56.060255       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 11:54:56.060281       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:56.061281       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060494       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061807       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 11:54:56.061959       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-737478"
	I1217 11:54:56.062083       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 11:54:56.061443       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.060376       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061367       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061417       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061464       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.061396       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.156913       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:56.156931       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 11:54:56.156936       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 11:54:56.159051       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1d2b1bb8a1b76843007cd338cd29ad6ab7ffd7691330930addf1432fa7421ec5] <==
	I1217 11:54:53.795822       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:54:53.872114       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:53.972713       1 shared_informer.go:377] "Caches are synced"
	I1217 11:54:53.972760       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 11:54:53.972895       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:54:53.999441       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:54:53.999517       1 server_linux.go:136] "Using iptables Proxier"
	I1217 11:54:54.006735       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:54:54.007281       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 11:54:54.007448       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:54.009461       1 config.go:200] "Starting service config controller"
	I1217 11:54:54.011252       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:54:54.009632       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:54:54.009663       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:54:54.012073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:54:54.011503       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:54:54.009828       1 config.go:309] "Starting node config controller"
	I1217 11:54:54.012612       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:54:54.012663       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:54:54.112434       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:54:54.112528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:54:54.112562       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [59ffeef8ed7039998fb2d90ffdb8f586577c7fac1aeca5d33293a0883dcf6fe1] <==
	I1217 11:54:51.234050       1 serving.go:386] Generated self-signed cert in-memory
	W1217 11:54:52.838367       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 11:54:52.839273       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 11:54:52.839337       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 11:54:52.839349       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 11:54:52.884744       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 11:54:52.884909       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:54:52.888160       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:54:52.888312       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:54:52.888324       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 11:54:52.888341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 11:54:52.905483       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 11:54:52.905492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1217 11:54:52.989044       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 11:55:10 no-preload-737478 kubelet[751]: E1217 11:55:10.193771     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: E1217 11:55:12.373877     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: I1217 11:55:12.373913     751 scope.go:122] "RemoveContainer" containerID="d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: I1217 11:55:12.499034     751 scope.go:122] "RemoveContainer" containerID="d671879f367dd6f63cfa509fb293aece6acd5463eb00e22f603b0e7a7649c0d5"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: E1217 11:55:12.499322     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: I1217 11:55:12.499369     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:12 no-preload-737478 kubelet[751]: E1217 11:55:12.499588     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:20 no-preload-737478 kubelet[751]: E1217 11:55:20.193738     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:20 no-preload-737478 kubelet[751]: I1217 11:55:20.193793     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:20 no-preload-737478 kubelet[751]: E1217 11:55:20.194036     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:24 no-preload-737478 kubelet[751]: I1217 11:55:24.537262     751 scope.go:122] "RemoveContainer" containerID="e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372"
	Dec 17 11:55:33 no-preload-737478 kubelet[751]: E1217 11:55:33.013663     751 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n2kvr" containerName="coredns"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: E1217 11:55:37.374150     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: I1217 11:55:37.374185     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: I1217 11:55:37.575595     751 scope.go:122] "RemoveContainer" containerID="c723d7097a488a1d158ce208aa222014035dfa30a712ffec548122603e962372"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: E1217 11:55:37.575941     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: I1217 11:55:37.575964     751 scope.go:122] "RemoveContainer" containerID="0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	Dec 17 11:55:37 no-preload-737478 kubelet[751]: E1217 11:55:37.576138     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:40 no-preload-737478 kubelet[751]: E1217 11:55:40.192932     751 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" containerName="dashboard-metrics-scraper"
	Dec 17 11:55:40 no-preload-737478 kubelet[751]: I1217 11:55:40.192975     751 scope.go:122] "RemoveContainer" containerID="0e25eb79c6bd9f7b950880206e2210f3a124ab05f022c1e06157b45454f7a2ee"
	Dec 17 11:55:40 no-preload-737478 kubelet[751]: E1217 11:55:40.193142     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lzxn4_kubernetes-dashboard(62869478-ce02-4a64-bfee-d8127455619f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lzxn4" podUID="62869478-ce02-4a64-bfee-d8127455619f"
	Dec 17 11:55:47 no-preload-737478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:55:47 no-preload-737478 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:55:47 no-preload-737478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:47 no-preload-737478 systemd[1]: kubelet.service: Consumed 1.896s CPU time.
	
	
	==> kubernetes-dashboard [fd3aeebcdab235840cddfdf9ae02671bf3de7091045cb6660338a7cb39e126c4] <==
	2025/12/17 11:55:06 Using namespace: kubernetes-dashboard
	2025/12/17 11:55:06 Using in-cluster config to connect to apiserver
	2025/12/17 11:55:06 Using secret token for csrf signing
	2025/12/17 11:55:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:55:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:55:06 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/17 11:55:06 Generating JWE encryption key
	2025/12/17 11:55:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:55:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:55:06 Initializing JWE encryption key from synchronized object
	2025/12/17 11:55:06 Creating in-cluster Sidecar client
	2025/12/17 11:55:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:06 Serving insecurely on HTTP port: 9090
	2025/12/17 11:55:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:06 Starting overwatch
	
	
	==> storage-provisioner [1e8a997b4b3411e7721f834867e01bee25d4a16e675ce73f50efbe10de7ad3f4] <==
	I1217 11:55:24.600306       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:55:24.610257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:55:24.610352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:55:24.612920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:28.069330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:32.330160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:35.929052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:38.982581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:42.004896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:42.010199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:42.010495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:55:42.010604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c69f3844-a665-403c-a70c-0a1934605a75", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-737478_6107e95e-6e8d-4fc9-bbbb-93a57b6f037b became leader
	I1217 11:55:42.010681       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-737478_6107e95e-6e8d-4fc9-bbbb-93a57b6f037b!
	W1217 11:55:42.012852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:42.016612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:42.111728       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-737478_6107e95e-6e8d-4fc9-bbbb-93a57b6f037b!
	W1217 11:55:44.019606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:44.024069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:46.028126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:46.034600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:48.038421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:48.042404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:50.046291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:50.051354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e366a6880a7038192225e1a0e3f1dfae39b7b0e063b30315983cee12d05f0372] <==
	I1217 11:54:53.762287       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:55:23.766960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737478 -n no-preload-737478
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737478 -n no-preload-737478: exit status 2 (365.649707ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-737478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-542273 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-542273 --alsologtostderr -v=1: exit status 80 (1.974987345s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-542273 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:55:51.416654 1979086 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:51.416776 1979086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:51.416785 1979086 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:51.416789 1979086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:51.417017 1979086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:55:51.417289 1979086 out.go:368] Setting JSON to false
	I1217 11:55:51.417316 1979086 mustload.go:66] Loading cluster: embed-certs-542273
	I1217 11:55:51.417697 1979086 config.go:182] Loaded profile config "embed-certs-542273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:51.418089 1979086 cli_runner.go:164] Run: docker container inspect embed-certs-542273 --format={{.State.Status}}
	I1217 11:55:51.439566 1979086 host.go:66] Checking if "embed-certs-542273" exists ...
	I1217 11:55:51.439873 1979086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:51.502556 1979086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 11:55:51.491754857 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:51.503257 1979086 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-542273 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 11:55:51.505238 1979086 out.go:179] * Pausing node embed-certs-542273 ... 
	I1217 11:55:51.506321 1979086 host.go:66] Checking if "embed-certs-542273" exists ...
	I1217 11:55:51.506690 1979086 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:51.506745 1979086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542273
	I1217 11:55:51.528708 1979086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34631 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/embed-certs-542273/id_rsa Username:docker}
	I1217 11:55:51.629158 1979086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:51.645856 1979086 pause.go:52] kubelet running: true
	I1217 11:55:51.645928 1979086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:55:51.841905 1979086 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:55:51.842054 1979086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:55:51.923577 1979086 cri.go:89] found id: "1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57"
	I1217 11:55:51.923605 1979086 cri.go:89] found id: "4273013d360ec8c8e165713eb420e127b9ac50d03a71760a379b7d109d56ca70"
	I1217 11:55:51.923611 1979086 cri.go:89] found id: "c4da15d668f5e0c2ba173770df24ded1614df1b9ae6d62a4056fbf6f97e50172"
	I1217 11:55:51.923617 1979086 cri.go:89] found id: "7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71"
	I1217 11:55:51.923623 1979086 cri.go:89] found id: "f398beed018faf9bbc2e0cce3ebe9161b6148e792e45e5cf0f77341e02476b82"
	I1217 11:55:51.923627 1979086 cri.go:89] found id: "dfe482616e84293a27eb3b23ada5a5a0ed3f7b9365e8582247b4ebc8ecd21761"
	I1217 11:55:51.923632 1979086 cri.go:89] found id: "66e8eb832ab4f5366549961c0b2bb218b272bf70168a1b853d7a1ea9895c604d"
	I1217 11:55:51.923641 1979086 cri.go:89] found id: "a0c2e003388306e1709cba308307c9c32f132cc9f51622dfcf37e31be663ef38"
	I1217 11:55:51.923645 1979086 cri.go:89] found id: "519a5111a4600b89107c3202de3f67b9bc492c3b2f1e0cd7846625b575c28310"
	I1217 11:55:51.923664 1979086 cri.go:89] found id: "61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	I1217 11:55:51.923673 1979086 cri.go:89] found id: "24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70"
	I1217 11:55:51.923678 1979086 cri.go:89] found id: ""
	I1217 11:55:51.923724 1979086 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:55:51.940669 1979086 retry.go:31] will retry after 368.203718ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:51Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:52.309218 1979086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:52.324692 1979086 pause.go:52] kubelet running: false
	I1217 11:55:52.324743 1979086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:55:52.492972 1979086 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:55:52.493045 1979086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:55:52.578897 1979086 cri.go:89] found id: "1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57"
	I1217 11:55:52.578922 1979086 cri.go:89] found id: "4273013d360ec8c8e165713eb420e127b9ac50d03a71760a379b7d109d56ca70"
	I1217 11:55:52.578927 1979086 cri.go:89] found id: "c4da15d668f5e0c2ba173770df24ded1614df1b9ae6d62a4056fbf6f97e50172"
	I1217 11:55:52.578932 1979086 cri.go:89] found id: "7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71"
	I1217 11:55:52.578936 1979086 cri.go:89] found id: "f398beed018faf9bbc2e0cce3ebe9161b6148e792e45e5cf0f77341e02476b82"
	I1217 11:55:52.578942 1979086 cri.go:89] found id: "dfe482616e84293a27eb3b23ada5a5a0ed3f7b9365e8582247b4ebc8ecd21761"
	I1217 11:55:52.578947 1979086 cri.go:89] found id: "66e8eb832ab4f5366549961c0b2bb218b272bf70168a1b853d7a1ea9895c604d"
	I1217 11:55:52.578951 1979086 cri.go:89] found id: "a0c2e003388306e1709cba308307c9c32f132cc9f51622dfcf37e31be663ef38"
	I1217 11:55:52.578955 1979086 cri.go:89] found id: "519a5111a4600b89107c3202de3f67b9bc492c3b2f1e0cd7846625b575c28310"
	I1217 11:55:52.578964 1979086 cri.go:89] found id: "61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	I1217 11:55:52.578974 1979086 cri.go:89] found id: "24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70"
	I1217 11:55:52.578979 1979086 cri.go:89] found id: ""
	I1217 11:55:52.579029 1979086 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:55:52.593323 1979086 retry.go:31] will retry after 450.031422ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:53.043701 1979086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:53.057742 1979086 pause.go:52] kubelet running: false
	I1217 11:55:53.057804 1979086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:55:53.227475 1979086 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:55:53.227578 1979086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:55:53.297925 1979086 cri.go:89] found id: "1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57"
	I1217 11:55:53.297951 1979086 cri.go:89] found id: "4273013d360ec8c8e165713eb420e127b9ac50d03a71760a379b7d109d56ca70"
	I1217 11:55:53.297958 1979086 cri.go:89] found id: "c4da15d668f5e0c2ba173770df24ded1614df1b9ae6d62a4056fbf6f97e50172"
	I1217 11:55:53.297971 1979086 cri.go:89] found id: "7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71"
	I1217 11:55:53.297976 1979086 cri.go:89] found id: "f398beed018faf9bbc2e0cce3ebe9161b6148e792e45e5cf0f77341e02476b82"
	I1217 11:55:53.297981 1979086 cri.go:89] found id: "dfe482616e84293a27eb3b23ada5a5a0ed3f7b9365e8582247b4ebc8ecd21761"
	I1217 11:55:53.297986 1979086 cri.go:89] found id: "66e8eb832ab4f5366549961c0b2bb218b272bf70168a1b853d7a1ea9895c604d"
	I1217 11:55:53.297991 1979086 cri.go:89] found id: "a0c2e003388306e1709cba308307c9c32f132cc9f51622dfcf37e31be663ef38"
	I1217 11:55:53.297996 1979086 cri.go:89] found id: "519a5111a4600b89107c3202de3f67b9bc492c3b2f1e0cd7846625b575c28310"
	I1217 11:55:53.298007 1979086 cri.go:89] found id: "61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	I1217 11:55:53.298012 1979086 cri.go:89] found id: "24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70"
	I1217 11:55:53.298015 1979086 cri.go:89] found id: ""
	I1217 11:55:53.298052 1979086 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:55:53.312727 1979086 out.go:203] 
	W1217 11:55:53.314172 1979086 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:55:53.314199 1979086 out.go:285] * 
	* 
	W1217 11:55:53.321351 1979086 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:55:53.322694 1979086 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-542273 --alsologtostderr -v=1 failed: exit status 80
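The stderr above shows the whole pause path: minikube disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers through crictl, then repeatedly runs "sudo runc list -f json", and it is that last probe which fails because /run/runc does not exist on this crio node. A sketch of the same probes run by hand over the profile's SSH; the first three mirror commands from the stderr above, while the last line is an added check and only an assumption about where the runtime state might live:

	out/minikube-linux-amd64 ssh -p embed-certs-542273 "sudo systemctl is-active kubelet"
	out/minikube-linux-amd64 ssh -p embed-certs-542273 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-amd64 ssh -p embed-certs-542273 "sudo runc list -f json"        # fails: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p embed-certs-542273 "ls -ld /run/runc /run/crio"    # see which runtime state directories actually exist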
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-542273
helpers_test.go:244: (dbg) docker inspect embed-certs-542273:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c",
	        "Created": "2025-12-17T11:53:42.422221245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1968979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:54.402301783Z",
	            "FinishedAt": "2025-12-17T11:54:53.21201382Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c-json.log",
	        "Name": "/embed-certs-542273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-542273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-542273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c",
	                "LowerDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-542273",
	                "Source": "/var/lib/docker/volumes/embed-certs-542273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-542273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-542273",
	                "name.minikube.sigs.k8s.io": "embed-certs-542273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0ec644fd21e5ff45e68d383a8ce3644af96c2fce65b0c252c0207c5d785e5334",
	            "SandboxKey": "/var/run/docker/netns/0ec644fd21e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34631"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34632"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34635"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34633"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34634"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-542273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d402fb644edc9023d8248c192d3a2f7035874f1b3b272648cd1fc766ab85445",
	                    "EndpointID": "475b93239a6e00a94e48bd524dd2c61965174f3062c499a41e742e6ca705e136",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ba:f3:54:0a:c5:c6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-542273",
	                        "b1f11181a02b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
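This inspect output is the same data the pause command consumed: it read the 22/tcp host port (34631) through a Go-template inspect to open its SSH session, and the State block confirms the container was still running and not paused when the command gave up. Equivalent one-liners (the first mirrors the inspect format in the stderr log; the second is an illustrative format over fields shown above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-542273
	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-542273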
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273: exit status 2 (344.150863ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-542273 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-542273 logs -n 25: (1.356421634s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1             │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ image   │ newest-cni-601829 image list --format=json                                                                                                                               │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ pause   │ -p newest-cni-601829 --alsologtostderr -v=1                                                                                                                              │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-382022 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ delete  │ -p newest-cni-601829                                                                                                                                                     │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p newest-cni-601829                                                                                                                                                     │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p embed-certs-542273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                   │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p auto-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                  │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 pgrep -a kubelet                                                                                                                                          │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ image   │ no-preload-737478 image list --format=json                                                                                                                               │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ pause   │ -p no-preload-737478 --alsologtostderr -v=1                                                                                                                              │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ image   │ embed-certs-542273 image list --format=json                                                                                                                              │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ pause   │ -p embed-certs-542273 --alsologtostderr -v=1                                                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo cat /etc/nsswitch.conf                                                                                                                               │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/hosts                                                                                                                                       │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ delete  │ -p no-preload-737478                                                                                                                                                     │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo cat /etc/resolv.conf                                                                                                                                 │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ ssh     │ -p auto-213935 sudo crictl pods                                                                                                                                          │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ ssh     │ -p auto-213935 sudo crictl ps --all                                                                                                                                      │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:55:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:55:05.915015 1972864 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:05.915174 1972864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:05.915182 1972864 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:05.915188 1972864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:05.915474 1972864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:55:05.916077 1972864 out.go:368] Setting JSON to false
	I1217 11:55:05.917928 1972864 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20251,"bootTime":1765952255,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:55:05.918012 1972864 start.go:143] virtualization: kvm guest
	I1217 11:55:05.920036 1972864 out.go:179] * [default-k8s-diff-port-382022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:55:05.921753 1972864 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:55:05.921776 1972864 notify.go:221] Checking for updates...
	I1217 11:55:05.924500 1972864 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:05.926029 1972864 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:05.927481 1972864 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:55:05.928660 1972864 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:55:05.930205 1972864 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:55:05.932089 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:05.932942 1972864 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:05.966016 1972864 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:55:05.966214 1972864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:06.051196 1972864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:55:06.035134766 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:06.051362 1972864 docker.go:319] overlay module found
	I1217 11:55:06.053307 1972864 out.go:179] * Using the docker driver based on existing profile
	I1217 11:55:06.055121 1972864 start.go:309] selected driver: docker
	I1217 11:55:06.055187 1972864 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:06.055310 1972864 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:06.056083 1972864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:06.137330 1972864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:55:06.123341974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:06.137759 1972864 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:06.137803 1972864 cni.go:84] Creating CNI manager for ""
	I1217 11:55:06.137872 1972864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:06.137919 1972864 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:05.330562 1968420 node_ready.go:49] node "embed-certs-542273" is "Ready"
	I1217 11:55:05.330602 1968420 node_ready.go:38] duration metric: took 2.307590665s for node "embed-certs-542273" to be "Ready" ...
	I1217 11:55:05.330621 1968420 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:05.330685 1968420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:06.139050 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.109697945s)
	I1217 11:55:06.139117 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.099340031s)
	I1217 11:55:06.139286 1968420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.913743017s)
	I1217 11:55:06.139473 1972864 out.go:179] * Starting "default-k8s-diff-port-382022" primary control-plane node in "default-k8s-diff-port-382022" cluster
	I1217 11:55:06.139346 1968420 api_server.go:72] duration metric: took 3.348259497s to wait for apiserver process to appear ...
	I1217 11:55:06.139489 1968420 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:06.139509 1968420 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:55:06.140802 1968420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-542273 addons enable metrics-server
	
	I1217 11:55:06.140797 1972864 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:55:06.141926 1972864 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:06.144623 1968420 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:55:06.144646 1968420 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
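For context: the 500 responses above are kube-apiserver's aggregated /healthz output, and the two [-] entries are post-start hooks (RBAC bootstrap roles, default priority classes) that have not finished yet; minikube simply re-polls the endpoint until it returns 200, as it does a few lines below. A minimal manual reproduction of that poll with curl (illustrative sketch only, not minikube's own code):

	# -k skips verification against the cluster's self-signed CA, -f makes curl exit
	# non-zero on the HTTP 500, so the loop ends once /healthz reports ok
	until curl -ksf https://192.168.94.2:8443/healthz >/dev/null; do
		sleep 1
	done
	echo "apiserver healthz ok"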
	I1217 11:55:06.155204 1968420 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 11:55:06.143128 1972864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:06.143168 1972864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:55:06.143181 1972864 cache.go:65] Caching tarball of preloaded images
	I1217 11:55:06.143222 1972864 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:06.143289 1972864 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:55:06.143302 1972864 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:55:06.143455 1972864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:55:06.170086 1972864 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:06.170124 1972864 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:06.170144 1972864 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:06.170183 1972864 start.go:360] acquireMachinesLock for default-k8s-diff-port-382022: {Name:mkc3ede9873fa3c6fdab76bd3c88723bee4b3785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:06.170258 1972864 start.go:364] duration metric: took 50.675µs to acquireMachinesLock for "default-k8s-diff-port-382022"
	I1217 11:55:06.170281 1972864 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:55:06.170291 1972864 fix.go:54] fixHost starting: 
	I1217 11:55:06.170622 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:06.191065 1972864 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382022: state=Stopped err=<nil>
	W1217 11:55:06.191102 1972864 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 11:55:05.010946 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:07.509563 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	I1217 11:55:06.156501 1968420 addons.go:530] duration metric: took 3.364667334s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:55:06.639611 1968420 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 11:55:06.645152 1968420 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 11:55:06.646332 1968420 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:06.646360 1968420 api_server.go:131] duration metric: took 506.863143ms to wait for apiserver health ...
	I1217 11:55:06.646370 1968420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:06.653410 1968420 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:06.653442 1968420 system_pods.go:61] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:06.653453 1968420 system_pods.go:61] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:06.653461 1968420 system_pods.go:61] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:55:06.653469 1968420 system_pods.go:61] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:06.653477 1968420 system_pods.go:61] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:06.653484 1968420 system_pods.go:61] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:06.653496 1968420 system_pods.go:61] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:06.653502 1968420 system_pods.go:61] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:06.653514 1968420 system_pods.go:74] duration metric: took 7.137942ms to wait for pod list to return data ...
	I1217 11:55:06.653523 1968420 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:06.656046 1968420 default_sa.go:45] found service account: "default"
	I1217 11:55:06.656064 1968420 default_sa.go:55] duration metric: took 2.535516ms for default service account to be created ...
	I1217 11:55:06.656073 1968420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:06.658845 1968420 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:06.658872 1968420 system_pods.go:89] "coredns-66bc5c9577-t66bd" [12ccdad4-eb85-447a-b66a-5b9df90b40e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:06.658881 1968420 system_pods.go:89] "etcd-embed-certs-542273" [a68f013e-780c-446f-aba0-4fa41be1f816] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:06.658890 1968420 system_pods.go:89] "kindnet-lvlhs" [79e10c76-fde0-4f9b-b7c2-7fa3bb3ede3a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 11:55:06.658898 1968420 system_pods.go:89] "kube-apiserver-embed-certs-542273" [83af3b24-65ce-4e77-80a6-cdcd38da76fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:06.658912 1968420 system_pods.go:89] "kube-controller-manager-embed-certs-542273" [d4d42fc5-7192-48c2-8fc8-ad76adbcee34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:06.658920 1968420 system_pods.go:89] "kube-proxy-gfbw9" [409200b4-d7e2-4aa0-87f9-64c6f73e93c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:06.658936 1968420 system_pods.go:89] "kube-scheduler-embed-certs-542273" [181fdb3e-6ae0-4912-8855-a2a62d97459e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:06.658943 1968420 system_pods.go:89] "storage-provisioner" [88cd3e31-ccf4-442e-9f0e-e1abc10069b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:06.658952 1968420 system_pods.go:126] duration metric: took 2.874094ms to wait for k8s-apps to be running ...
	I1217 11:55:06.658961 1968420 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:06.659011 1968420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:06.674841 1968420 system_svc.go:56] duration metric: took 15.867236ms WaitForService to wait for kubelet
	I1217 11:55:06.674874 1968420 kubeadm.go:587] duration metric: took 3.883790125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:06.674896 1968420 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:06.679469 1968420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:06.679504 1968420 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:06.679524 1968420 node_conditions.go:105] duration metric: took 4.620965ms to run NodePressure ...
	I1217 11:55:06.679551 1968420 start.go:242] waiting for startup goroutines ...
	I1217 11:55:06.679561 1968420 start.go:247] waiting for cluster config update ...
	I1217 11:55:06.679575 1968420 start.go:256] writing updated cluster config ...
	I1217 11:55:06.679934 1968420 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:06.685757 1968420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:06.690580 1968420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 11:55:08.696479 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
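The pod_ready waits above and below poll the pod's Ready condition through the API server; roughly the same check can be done by hand with kubectl, assuming the usual minikube kubectl context named after the profile (command illustrative, names taken from the log):

	kubectl --context embed-certs-542273 -n kube-system \
	  wait --for=condition=Ready pod/coredns-66bc5c9577-t66bd --timeout=4m0s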
	I1217 11:55:05.221867 1968426 out.go:252]   - Generating certificates and keys ...
	I1217 11:55:05.222013 1968426 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:55:05.222143 1968426 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:55:05.515027 1968426 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:55:05.840693 1968426 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:55:06.051969 1968426 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:55:06.488194 1968426 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:55:07.147959 1968426 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:55:07.148173 1968426 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-213935 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:55:07.452899 1968426 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:55:07.453095 1968426 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-213935 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:55:07.556891 1968426 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:55:07.863151 1968426 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:55:07.920730 1968426 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:55:07.920839 1968426 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:55:08.231818 1968426 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:55:08.551353 1968426 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:55:08.710825 1968426 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:55:08.929825 1968426 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:55:09.189615 1968426 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:55:09.190223 1968426 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:55:09.194170 1968426 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:55:06.193174 1972864 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-382022" ...
	I1217 11:55:06.193264 1972864 cli_runner.go:164] Run: docker start default-k8s-diff-port-382022
	I1217 11:55:06.526174 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:06.552214 1972864 kic.go:430] container "default-k8s-diff-port-382022" state is running.
	I1217 11:55:06.552760 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:06.577698 1972864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/config.json ...
	I1217 11:55:06.577964 1972864 machine.go:94] provisionDockerMachine start ...
	I1217 11:55:06.578041 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:06.598700 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:06.599024 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:06.599042 1972864 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:55:06.599663 1972864 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45878->127.0.0.1:34641: read: connection reset by peer
	I1217 11:55:09.755152 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:55:09.755203 1972864 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-382022"
	I1217 11:55:09.755274 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:09.782521 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:09.782860 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:09.782881 1972864 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382022 && echo "default-k8s-diff-port-382022" | sudo tee /etc/hostname
	I1217 11:55:09.951044 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382022
	
	I1217 11:55:09.951162 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:09.977929 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:09.978252 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:09.978284 1972864 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:55:10.136717 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:55:10.136752 1972864 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:55:10.136778 1972864 ubuntu.go:190] setting up certificates
	I1217 11:55:10.136791 1972864 provision.go:84] configureAuth start
	I1217 11:55:10.136861 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:10.163824 1972864 provision.go:143] copyHostCerts
	I1217 11:55:10.163910 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:55:10.163932 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:55:10.164006 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:55:10.164229 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:55:10.164249 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:55:10.164313 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:55:10.164476 1972864 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:55:10.164491 1972864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:55:10.164547 1972864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:55:10.164663 1972864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382022 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-382022 localhost minikube]
	I1217 11:55:10.370953 1972864 provision.go:177] copyRemoteCerts
	I1217 11:55:10.371025 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:55:10.371104 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:10.396183 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:10.499250 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:55:10.523707 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 11:55:10.547625 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:55:10.573466 1972864 provision.go:87] duration metric: took 436.653063ms to configureAuth
	I1217 11:55:10.573501 1972864 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:55:10.573749 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:10.573882 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:10.597313 1972864 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:10.597651 1972864 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34641 <nil> <nil>}
	I1217 11:55:10.597694 1972864 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:55:11.011460 1972864 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:55:11.011491 1972864 machine.go:97] duration metric: took 4.43350855s to provisionDockerMachine
	I1217 11:55:11.011507 1972864 start.go:293] postStartSetup for "default-k8s-diff-port-382022" (driver="docker")
	I1217 11:55:11.011519 1972864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:55:11.011621 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:55:11.011686 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.034079 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.141182 1972864 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:55:11.145913 1972864 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:55:11.145947 1972864 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:55:11.145962 1972864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:55:11.146017 1972864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:55:11.146109 1972864 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:55:11.146199 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:55:11.157064 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:11.177488 1972864 start.go:296] duration metric: took 165.962986ms for postStartSetup
	I1217 11:55:11.177607 1972864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:55:11.177653 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.204846 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.308658 1972864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:55:11.315100 1972864 fix.go:56] duration metric: took 5.144801074s for fixHost
	I1217 11:55:11.315129 1972864 start.go:83] releasing machines lock for "default-k8s-diff-port-382022", held for 5.144858234s
	I1217 11:55:11.315199 1972864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-382022
	I1217 11:55:11.338745 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:55:11.338818 1972864 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:55:11.338829 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:55:11.338879 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:55:11.338917 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:55:11.338953 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:55:11.339012 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:11.339099 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:55:11.339164 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:11.365430 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:11.495096 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:55:11.525235 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:55:11.552785 1972864 ssh_runner.go:195] Run: openssl version
	I1217 11:55:11.562436 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.573636 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:55:11.584676 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.589300 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.589361 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:55:11.641338 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:55:11.652096 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.661866 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:55:11.674821 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.680734 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.680803 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:11.729519 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:55:11.740678 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.750972 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:55:11.760660 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.766240 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.766330 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:55:11.818447 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:55:11.829562 1972864 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:55:11.834749 1972864 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
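The sudo test -L checks above (3ec20f2e.0, b5213941.0, 51391683.0) verify OpenSSL subject-hash symlinks in /etc/ssl/certs: each link name is the value printed by the preceding openssl x509 -hash -noout runs for the corresponding PEM. A sketch of recreating one such link by hand, using the same paths as in the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	ls -l "/etc/ssl/certs/${h}.0"   # expected to resolve to minikubeCA.pem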
	I1217 11:55:11.840877 1972864 ssh_runner.go:195] Run: cat /version.json
	I1217 11:55:11.840989 1972864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:55:11.929334 1972864 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:11.937487 1972864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:55:11.993284 1972864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:55:11.999774 1972864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:55:11.999914 1972864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:55:12.011051 1972864 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:55:12.011078 1972864 start.go:496] detecting cgroup driver to use...
	I1217 11:55:12.011113 1972864 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:55:12.011160 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:55:12.031108 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:55:12.049252 1972864 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:55:12.049318 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:55:12.069726 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:55:12.085078 1972864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:55:12.209963 1972864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:55:12.316465 1972864 docker.go:234] disabling docker service ...
	I1217 11:55:12.316548 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:55:12.333455 1972864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:55:12.348978 1972864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:55:12.457995 1972864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:55:12.596548 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:55:12.612573 1972864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:55:12.628307 1972864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:55:12.628394 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.646387 1972864 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:55:12.646626 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.694485 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.704956 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.714913 1972864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:55:12.723885 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.734227 1972864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.746830 1972864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:55:12.760588 1972864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:55:12.773376 1972864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:55:12.783526 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:12.888361 1972864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 11:55:13.205918 1972864 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:55:13.205985 1972864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:55:13.210225 1972864 start.go:564] Will wait 60s for crictl version
	I1217 11:55:13.210287 1972864 ssh_runner.go:195] Run: which crictl
	I1217 11:55:13.214055 1972864 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:55:13.241923 1972864 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:55:13.242004 1972864 ssh_runner.go:195] Run: crio --version
	I1217 11:55:13.272236 1972864 ssh_runner.go:195] Run: crio --version
	I1217 11:55:13.311001 1972864 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
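The sed edits earlier in this block rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A quick spot-check of the resulting drop-in, with the values implied by those commands shown as comments (illustrative):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",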
	W1217 11:55:09.509773 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:11.511479 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:11.202390 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:13.702094 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:09.198065 1968426 out.go:252]   - Booting up control plane ...
	I1217 11:55:09.198177 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:55:09.198291 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:55:09.198350 1968426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:55:09.212059 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:55:09.212187 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:55:09.221646 1968426 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:55:09.222066 1968426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:55:09.222112 1968426 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:55:09.330911 1968426 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:55:09.331064 1968426 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:55:10.335948 1968426 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0019159s
	I1217 11:55:10.337319 1968426 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:55:10.337604 1968426 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 11:55:10.337743 1968426 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:55:10.337845 1968426 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:55:12.778929 1968426 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.441400896s
	I1217 11:55:12.814618 1968426 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.476747339s
	I1217 11:55:13.313210 1972864 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-382022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:55:13.337041 1972864 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:55:13.342732 1972864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:55:13.357159 1972864 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:55:13.357335 1972864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:13.357405 1972864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:55:13.401031 1972864 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:55:13.401061 1972864 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:55:13.401124 1972864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:55:13.435767 1972864 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:55:13.435795 1972864 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:55:13.435805 1972864 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.3 crio true true} ...
	I1217 11:55:13.435950 1972864 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-382022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:55:13.436036 1972864 ssh_runner.go:195] Run: crio config
	I1217 11:55:13.501779 1972864 cni.go:84] Creating CNI manager for ""
	I1217 11:55:13.501805 1972864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:13.501824 1972864 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:55:13.501855 1972864 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382022 NodeName:default-k8s-diff-port-382022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:55:13.502039 1972864 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-382022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:55:13.502129 1972864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:55:13.513932 1972864 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:55:13.514003 1972864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:55:13.524819 1972864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 11:55:13.541119 1972864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:55:13.557811 1972864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
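The scp just above writes the rendered kubeadm config (shown in full earlier in this log) to /var/tmp/minikube/kubeadm.yaml.new on the node. If desired, it can be sanity-checked with kubeadm itself; kubeadm config validate exists in recent kubeadm releases, and the binary path below is assumed to sit alongside the kubelet binary referenced in this log:

	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new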
	I1217 11:55:13.576185 1972864 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:55:13.581146 1972864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:55:13.594763 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:13.713132 1972864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:13.740075 1972864 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022 for IP: 192.168.76.2
	I1217 11:55:13.740104 1972864 certs.go:195] generating shared ca certs ...
	I1217 11:55:13.740126 1972864 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:13.740330 1972864 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:55:13.740393 1972864 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:55:13.740406 1972864 certs.go:257] generating profile certs ...
	I1217 11:55:13.740497 1972864 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/client.key
	I1217 11:55:13.740635 1972864 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key.e7b7ff3a
	I1217 11:55:13.740721 1972864 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key
	I1217 11:55:13.740846 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:55:13.740880 1972864 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:55:13.740887 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:55:13.740911 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:55:13.740934 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:55:13.740955 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:55:13.740993 1972864 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:55:13.741867 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:55:13.773747 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:55:13.804586 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:55:13.834707 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:55:13.869625 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 11:55:13.898355 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:55:13.922845 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:55:13.947273 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/default-k8s-diff-port-382022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:55:13.972061 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:55:14.001446 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:55:14.027589 1972864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:55:14.054132 1972864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:55:14.073337 1972864 ssh_runner.go:195] Run: openssl version
	I1217 11:55:14.082156 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.092983 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:55:14.103945 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.109736 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.109811 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:55:14.172322 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:55:14.182999 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.194351 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:55:14.210464 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.216214 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.216730 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:55:14.276197 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:55:14.287490 1972864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.299145 1972864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:55:14.312656 1972864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.319064 1972864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.319132 1972864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:55:14.383567 1972864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:55:14.400321 1972864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:55:14.410392 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:55:14.479493 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:55:14.544631 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:55:14.604881 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:55:14.664836 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:55:14.723985 1972864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
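The -checkend 86400 runs above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, 1 means it expires within that window, which is presumably how the existing certs are judged reusable here. For example, against one of the same files:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"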
	I1217 11:55:14.789590 1972864 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:14.789736 1972864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:55:14.789811 1972864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:55:14.835987 1972864 cri.go:89] found id: "8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910"
	I1217 11:55:14.836014 1972864 cri.go:89] found id: "b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7"
	I1217 11:55:14.836031 1972864 cri.go:89] found id: "7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0"
	I1217 11:55:14.836036 1972864 cri.go:89] found id: "6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466"
	I1217 11:55:14.836040 1972864 cri.go:89] found id: ""
	I1217 11:55:14.836091 1972864 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 11:55:14.856976 1972864 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:55:14Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:55:14.857081 1972864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:55:14.871120 1972864 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:55:14.871143 1972864 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:55:14.871281 1972864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:55:14.883106 1972864 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:55:14.884356 1972864 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382022" does not appear in /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:14.885194 1972864 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-1669348/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382022" cluster setting kubeconfig missing "default-k8s-diff-port-382022" context setting]
	I1217 11:55:14.886458 1972864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.889149 1972864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:55:14.903999 1972864 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 11:55:14.904040 1972864 kubeadm.go:602] duration metric: took 32.890057ms to restartPrimaryControlPlane
	I1217 11:55:14.904052 1972864 kubeadm.go:403] duration metric: took 114.480546ms to StartCluster
	I1217 11:55:14.904073 1972864 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.904147 1972864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:14.906555 1972864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:14.907164 1972864 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:14.907249 1972864 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:14.907415 1972864 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:55:14.907508 1972864 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.907527 1972864 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.907570 1972864 addons.go:248] addon storage-provisioner should already be in state true
	I1217 11:55:14.907602 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.908114 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.908339 1972864 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.908366 1972864 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.908413 1972864 addons.go:248] addon dashboard should already be in state true
	I1217 11:55:14.908464 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.908988 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.909258 1972864 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382022"
	I1217 11:55:14.909280 1972864 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382022"
	I1217 11:55:14.909613 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.912097 1972864 out.go:179] * Verifying Kubernetes components...
	I1217 11:55:14.838768 1968426 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501405436s
	I1217 11:55:14.866719 1968426 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:55:14.879181 1968426 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:55:14.897113 1968426 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:55:14.897628 1968426 kubeadm.go:319] [mark-control-plane] Marking the node auto-213935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:55:14.933522 1968426 kubeadm.go:319] [bootstrap-token] Using token: xj4v1d.49m4e5gs1ckj0agu
	I1217 11:55:14.914078 1972864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:14.941643 1972864 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 11:55:14.941685 1972864 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:55:14.943971 1972864 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:14.943992 1972864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:55:14.944059 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.944144 1972864 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 11:55:14.944150 1972864 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382022"
	W1217 11:55:14.944168 1972864 addons.go:248] addon default-storageclass should already be in state true
	I1217 11:55:14.944231 1972864 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:55:14.944732 1972864 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:55:14.945181 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 11:55:14.945206 1972864 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 11:55:14.945256 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.985758 1972864 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:14.985961 1972864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:55:14.986156 1972864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:55:14.989510 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:14.991637 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:15.024797 1972864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:55:15.126636 1972864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:15.136146 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 11:55:15.136172 1972864 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 11:55:15.137704 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:15.145951 1972864 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:55:15.164090 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 11:55:15.164119 1972864 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 11:55:15.173306 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:15.187105 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 11:55:15.187135 1972864 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 11:55:15.211988 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 11:55:15.212013 1972864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 11:55:15.235388 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 11:55:15.235420 1972864 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 11:55:15.261315 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 11:55:15.261346 1972864 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 11:55:15.282989 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 11:55:15.283043 1972864 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 11:55:15.310716 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 11:55:15.310761 1972864 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 11:55:15.336860 1972864 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:55:15.336898 1972864 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 11:55:15.359900 1972864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:55:14.936461 1968426 out.go:252]   - Configuring RBAC rules ...
	I1217 11:55:14.936651 1968426 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:55:14.943579 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:55:14.957528 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:55:14.965463 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:55:14.973724 1968426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:55:14.980763 1968426 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:55:15.247181 1968426 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:55:15.680248 1968426 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:55:16.249887 1968426 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:55:16.251461 1968426 kubeadm.go:319] 
	I1217 11:55:16.252502 1968426 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:55:16.252519 1968426 kubeadm.go:319] 
	I1217 11:55:16.252621 1968426 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:55:16.252626 1968426 kubeadm.go:319] 
	I1217 11:55:16.252655 1968426 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:55:16.252727 1968426 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:55:16.252784 1968426 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:55:16.252789 1968426 kubeadm.go:319] 
	I1217 11:55:16.252861 1968426 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:55:16.252866 1968426 kubeadm.go:319] 
	I1217 11:55:16.252924 1968426 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:55:16.252929 1968426 kubeadm.go:319] 
	I1217 11:55:16.252987 1968426 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:55:16.253073 1968426 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:55:16.253152 1968426 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:55:16.253156 1968426 kubeadm.go:319] 
	I1217 11:55:16.253254 1968426 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:55:16.253341 1968426 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:55:16.253346 1968426 kubeadm.go:319] 
	I1217 11:55:16.253707 1968426 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xj4v1d.49m4e5gs1ckj0agu \
	I1217 11:55:16.253911 1968426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 \
	I1217 11:55:16.253969 1968426 kubeadm.go:319] 	--control-plane 
	I1217 11:55:16.253985 1968426 kubeadm.go:319] 
	I1217 11:55:16.254174 1968426 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:55:16.254236 1968426 kubeadm.go:319] 
	I1217 11:55:16.254382 1968426 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xj4v1d.49m4e5gs1ckj0agu \
	I1217 11:55:16.254589 1968426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72ca69e79565938747b3b933a6bdf5232dfea68313e6b67b2ce298f81b785832 
	I1217 11:55:16.256944 1968426 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 11:55:16.257103 1968426 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:55:16.257146 1968426 cni.go:84] Creating CNI manager for ""
	I1217 11:55:16.257166 1968426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:55:16.261095 1968426 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 11:55:16.649970 1972864 node_ready.go:49] node "default-k8s-diff-port-382022" is "Ready"
	I1217 11:55:16.650012 1972864 node_ready.go:38] duration metric: took 1.504027618s for node "default-k8s-diff-port-382022" to be "Ready" ...
	I1217 11:55:16.650047 1972864 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:16.650124 1972864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:17.435218 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.297480989s)
	I1217 11:55:17.435331 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261978145s)
	I1217 11:55:17.435439 1972864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.075506487s)
	I1217 11:55:17.435502 1972864 api_server.go:72] duration metric: took 2.528210759s to wait for apiserver process to appear ...
	I1217 11:55:17.435674 1972864 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:17.435729 1972864 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:55:17.437001 1972864 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382022 addons enable metrics-server
	
	I1217 11:55:17.440773 1972864 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 11:55:17.440809 1972864 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 11:55:17.443120 1972864 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 11:55:14.010760 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:16.511895 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:15.703398 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:18.196308 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:16.262819 1968426 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 11:55:16.269037 1968426 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 11:55:16.269068 1968426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 11:55:16.288816 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 11:55:16.641451 1968426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:55:16.641740 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:16.641765 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-213935 minikube.k8s.io/updated_at=2025_12_17T11_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=auto-213935 minikube.k8s.io/primary=true
	I1217 11:55:16.666162 1968426 ops.go:34] apiserver oom_adj: -16
	I1217 11:55:16.779798 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:17.280793 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:17.780773 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:18.280818 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:18.780724 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:19.280002 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:19.780707 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.279886 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.780546 1968426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:55:20.849820 1968426 kubeadm.go:1114] duration metric: took 4.208149599s to wait for elevateKubeSystemPrivileges
	I1217 11:55:20.849881 1968426 kubeadm.go:403] duration metric: took 16.084919874s to StartCluster
	I1217 11:55:20.849907 1968426 settings.go:142] acquiring lock: {Name:mk7fc93e9fddaaeadd60bee615765ca903926ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:20.849987 1968426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:20.852845 1968426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/kubeconfig: {Name:mk261d3801288153d891c5b602c6c12e45a77448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:20.853190 1968426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:55:20.853184 1968426 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:20.853505 1968426 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:55:20.853715 1968426 addons.go:70] Setting storage-provisioner=true in profile "auto-213935"
	I1217 11:55:20.853737 1968426 addons.go:239] Setting addon storage-provisioner=true in "auto-213935"
	I1217 11:55:20.853821 1968426 addons.go:70] Setting default-storageclass=true in profile "auto-213935"
	I1217 11:55:20.853834 1968426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-213935"
	I1217 11:55:20.853900 1968426 host.go:66] Checking if "auto-213935" exists ...
	I1217 11:55:20.854788 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.854908 1968426 config.go:182] Loaded profile config "auto-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:20.855253 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.856383 1968426 out.go:179] * Verifying Kubernetes components...
	I1217 11:55:20.859959 1968426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:55:20.885192 1968426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:55:17.444238 1972864 addons.go:530] duration metric: took 2.53682973s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 11:55:17.936751 1972864 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 11:55:17.941896 1972864 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 11:55:17.943002 1972864 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:17.943033 1972864 api_server.go:131] duration metric: took 507.348152ms to wait for apiserver health ...
	I1217 11:55:17.943044 1972864 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:17.946080 1972864 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:17.946129 1972864 system_pods.go:61] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:17.946148 1972864 system_pods.go:61] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:17.946156 1972864 system_pods.go:61] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:55:17.946161 1972864 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:17.946167 1972864 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:17.946178 1972864 system_pods.go:61] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:17.946186 1972864 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:17.946195 1972864 system_pods.go:61] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Running
	I1217 11:55:17.946206 1972864 system_pods.go:74] duration metric: took 3.151523ms to wait for pod list to return data ...
	I1217 11:55:17.946218 1972864 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:17.948580 1972864 default_sa.go:45] found service account: "default"
	I1217 11:55:17.948602 1972864 default_sa.go:55] duration metric: took 2.373002ms for default service account to be created ...
	I1217 11:55:17.948613 1972864 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:17.951056 1972864 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:17.951085 1972864 system_pods.go:89] "coredns-66bc5c9577-8nz5c" [7c8b1b28-b3d5-4b10-9c3f-e2ae41829d1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:17.951094 1972864 system_pods.go:89] "etcd-default-k8s-diff-port-382022" [89624998-9d7a-46d1-bb95-95d799e1f333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 11:55:17.951103 1972864 system_pods.go:89] "kindnet-lsrk2" [59fc80a3-14c0-4b2b-9b4d-b8fd3f38337c] Running
	I1217 11:55:17.951109 1972864 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382022" [006f19c1-f459-4182-9d8f-2eade0c6c10e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 11:55:17.951118 1972864 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382022" [ace736c2-f536-44c9-9bab-69c24a0714c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 11:55:17.951124 1972864 system_pods.go:89] "kube-proxy-ss2p8" [d7f7db01-8945-4a8f-aa14-c6f50ac56824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 11:55:17.951132 1972864 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382022" [703d3040-1a85-4a71-a17e-5043245475fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 11:55:17.951136 1972864 system_pods.go:89] "storage-provisioner" [973e9e2c-a15b-4a45-8d2f-955f94325749] Running
	I1217 11:55:17.951143 1972864 system_pods.go:126] duration metric: took 2.523832ms to wait for k8s-apps to be running ...
	I1217 11:55:17.951158 1972864 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:17.951204 1972864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:17.978730 1972864 system_svc.go:56] duration metric: took 27.563396ms WaitForService to wait for kubelet
	I1217 11:55:17.978769 1972864 kubeadm.go:587] duration metric: took 3.071477819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:17.978809 1972864 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:17.981772 1972864 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:17.981811 1972864 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:17.981831 1972864 node_conditions.go:105] duration metric: took 3.01527ms to run NodePressure ...
	I1217 11:55:17.981847 1972864 start.go:242] waiting for startup goroutines ...
	I1217 11:55:17.981857 1972864 start.go:247] waiting for cluster config update ...
	I1217 11:55:17.981873 1972864 start.go:256] writing updated cluster config ...
	I1217 11:55:17.982150 1972864 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:17.986348 1972864 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:17.990447 1972864 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nz5c" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 11:55:19.996696 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:20.887329 1968426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:20.887356 1968426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:55:20.887434 1968426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-213935
	I1217 11:55:20.887839 1968426 addons.go:239] Setting addon default-storageclass=true in "auto-213935"
	I1217 11:55:20.887900 1968426 host.go:66] Checking if "auto-213935" exists ...
	I1217 11:55:20.888916 1968426 cli_runner.go:164] Run: docker container inspect auto-213935 --format={{.State.Status}}
	I1217 11:55:20.915669 1968426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34636 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/auto-213935/id_rsa Username:docker}
	I1217 11:55:20.916618 1968426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:20.916642 1968426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:55:20.916694 1968426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-213935
	I1217 11:55:20.949341 1968426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34636 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/auto-213935/id_rsa Username:docker}
	I1217 11:55:20.973408 1968426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:55:21.038628 1968426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:55:21.042668 1968426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:55:21.071913 1968426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:55:21.172319 1968426 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 11:55:21.173658 1968426 node_ready.go:35] waiting up to 15m0s for node "auto-213935" to be "Ready" ...
	I1217 11:55:21.361677 1968426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 11:55:19.008895 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:21.011708 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:23.012487 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:20.197307 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:22.697565 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:21.363154 1968426 addons.go:530] duration metric: took 509.647318ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 11:55:21.676506 1968426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-213935" context rescaled to 1 replicas
	W1217 11:55:23.177425 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:21.997728 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:24.498078 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:25.510927 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:28.009327 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:25.198433 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:27.696164 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:25.178008 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:27.677094 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:26.997192 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:29.496072 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:30.009958 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:32.508570 1963245 pod_ready.go:104] pod "coredns-7d764666f9-n2kvr" is not "Ready", error: <nil>
	W1217 11:55:30.196632 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:32.696852 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:29.677249 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	W1217 11:55:32.177004 1968426 node_ready.go:57] node "auto-213935" has "Ready":"False" status (will retry)
	I1217 11:55:33.510207 1963245 pod_ready.go:94] pod "coredns-7d764666f9-n2kvr" is "Ready"
	I1217 11:55:33.510240 1963245 pod_ready.go:86] duration metric: took 39.006915253s for pod "coredns-7d764666f9-n2kvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.513226 1963245 pod_ready.go:83] waiting for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.518013 1963245 pod_ready.go:94] pod "etcd-no-preload-737478" is "Ready"
	I1217 11:55:33.518042 1963245 pod_ready.go:86] duration metric: took 4.791962ms for pod "etcd-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.520439 1963245 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.525019 1963245 pod_ready.go:94] pod "kube-apiserver-no-preload-737478" is "Ready"
	I1217 11:55:33.525042 1963245 pod_ready.go:86] duration metric: took 4.576574ms for pod "kube-apiserver-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.527093 1963245 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.707246 1963245 pod_ready.go:94] pod "kube-controller-manager-no-preload-737478" is "Ready"
	I1217 11:55:33.707289 1963245 pod_ready.go:86] duration metric: took 180.171414ms for pod "kube-controller-manager-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:33.908251 1963245 pod_ready.go:83] waiting for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.308267 1963245 pod_ready.go:94] pod "kube-proxy-5tkm8" is "Ready"
	I1217 11:55:34.308294 1963245 pod_ready.go:86] duration metric: took 400.014798ms for pod "kube-proxy-5tkm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.508400 1963245 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.907605 1963245 pod_ready.go:94] pod "kube-scheduler-no-preload-737478" is "Ready"
	I1217 11:55:34.907635 1963245 pod_ready.go:86] duration metric: took 399.204157ms for pod "kube-scheduler-no-preload-737478" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.907651 1963245 pod_ready.go:40] duration metric: took 40.409789961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:34.953713 1963245 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 11:55:34.955479 1963245 out.go:179] * Done! kubectl is now configured to use "no-preload-737478" cluster and "default" namespace by default
	W1217 11:55:31.496442 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:33.497214 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:34.176846 1968426 node_ready.go:49] node "auto-213935" is "Ready"
	I1217 11:55:34.176882 1968426 node_ready.go:38] duration metric: took 13.003193123s for node "auto-213935" to be "Ready" ...
	I1217 11:55:34.176901 1968426 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:55:34.176959 1968426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:55:34.189526 1968426 api_server.go:72] duration metric: took 13.336301645s to wait for apiserver process to appear ...
	I1217 11:55:34.189581 1968426 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:55:34.189603 1968426 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 11:55:34.194353 1968426 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 11:55:34.195733 1968426 api_server.go:141] control plane version: v1.34.3
	I1217 11:55:34.195766 1968426 api_server.go:131] duration metric: took 6.176107ms to wait for apiserver health ...
	I1217 11:55:34.195777 1968426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:55:34.199311 1968426 system_pods.go:59] 8 kube-system pods found
	I1217 11:55:34.199371 1968426 system_pods.go:61] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.199377 1968426 system_pods.go:61] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.199384 1968426 system_pods.go:61] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.199388 1968426 system_pods.go:61] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.199392 1968426 system_pods.go:61] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.199400 1968426 system_pods.go:61] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.199403 1968426 system_pods.go:61] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.199408 1968426 system_pods.go:61] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.199415 1968426 system_pods.go:74] duration metric: took 3.630946ms to wait for pod list to return data ...
	I1217 11:55:34.199426 1968426 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:55:34.202301 1968426 default_sa.go:45] found service account: "default"
	I1217 11:55:34.202324 1968426 default_sa.go:55] duration metric: took 2.891984ms for default service account to be created ...
	I1217 11:55:34.202343 1968426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:55:34.205662 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.205701 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.205708 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.205715 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.205721 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.205725 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.205729 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.205733 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.205738 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.205764 1968426 retry.go:31] will retry after 248.175498ms: missing components: kube-dns
	I1217 11:55:34.457746 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.457785 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:55:34.457794 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.457802 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.457806 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.457812 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.457817 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.457823 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.457830 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 11:55:34.457859 1968426 retry.go:31] will retry after 326.462384ms: missing components: kube-dns
	I1217 11:55:34.789392 1968426 system_pods.go:86] 8 kube-system pods found
	I1217 11:55:34.789420 1968426 system_pods.go:89] "coredns-66bc5c9577-r2wht" [09fa9f78-a6fd-44a9-8000-231571287ca6] Running
	I1217 11:55:34.789426 1968426 system_pods.go:89] "etcd-auto-213935" [c99f8bbd-52f2-4ec7-a465-3262c0730c5f] Running
	I1217 11:55:34.789429 1968426 system_pods.go:89] "kindnet-648cv" [63412a74-18b6-40c9-8acd-6aa0dd310b10] Running
	I1217 11:55:34.789433 1968426 system_pods.go:89] "kube-apiserver-auto-213935" [e5b2b2ed-1ff4-4175-bc7f-25eeaeac890d] Running
	I1217 11:55:34.789438 1968426 system_pods.go:89] "kube-controller-manager-auto-213935" [20a10014-ba85-49b4-8c23-f30c806c8774] Running
	I1217 11:55:34.789445 1968426 system_pods.go:89] "kube-proxy-54kwh" [a09afdb6-59c1-408f-8129-c9cca45b3228] Running
	I1217 11:55:34.789450 1968426 system_pods.go:89] "kube-scheduler-auto-213935" [e7b19cd8-22da-4f45-b856-a32e647aeef8] Running
	I1217 11:55:34.789454 1968426 system_pods.go:89] "storage-provisioner" [bbe2c744-44a7-4053-86b5-31f2b0486973] Running
	I1217 11:55:34.789464 1968426 system_pods.go:126] duration metric: took 587.114184ms to wait for k8s-apps to be running ...
	I1217 11:55:34.789478 1968426 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:55:34.789560 1968426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:55:34.803259 1968426 system_svc.go:56] duration metric: took 13.763269ms WaitForService to wait for kubelet
	I1217 11:55:34.803301 1968426 kubeadm.go:587] duration metric: took 13.950081466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:34.803337 1968426 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:55:34.806420 1968426 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 11:55:34.806448 1968426 node_conditions.go:123] node cpu capacity is 8
	I1217 11:55:34.806467 1968426 node_conditions.go:105] duration metric: took 3.124048ms to run NodePressure ...
	I1217 11:55:34.806479 1968426 start.go:242] waiting for startup goroutines ...
	I1217 11:55:34.806487 1968426 start.go:247] waiting for cluster config update ...
	I1217 11:55:34.806497 1968426 start.go:256] writing updated cluster config ...
	I1217 11:55:34.806796 1968426 ssh_runner.go:195] Run: rm -f paused
	I1217 11:55:34.811172 1968426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:34.815270 1968426 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r2wht" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.821953 1968426 pod_ready.go:94] pod "coredns-66bc5c9577-r2wht" is "Ready"
	I1217 11:55:34.821976 1968426 pod_ready.go:86] duration metric: took 6.6841ms for pod "coredns-66bc5c9577-r2wht" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.824088 1968426 pod_ready.go:83] waiting for pod "etcd-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.827935 1968426 pod_ready.go:94] pod "etcd-auto-213935" is "Ready"
	I1217 11:55:34.827960 1968426 pod_ready.go:86] duration metric: took 3.852249ms for pod "etcd-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.829811 1968426 pod_ready.go:83] waiting for pod "kube-apiserver-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.833423 1968426 pod_ready.go:94] pod "kube-apiserver-auto-213935" is "Ready"
	I1217 11:55:34.833444 1968426 pod_ready.go:86] duration metric: took 3.613745ms for pod "kube-apiserver-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:34.835327 1968426 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.215776 1968426 pod_ready.go:94] pod "kube-controller-manager-auto-213935" is "Ready"
	I1217 11:55:35.215807 1968426 pod_ready.go:86] duration metric: took 380.458261ms for pod "kube-controller-manager-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.416163 1968426 pod_ready.go:83] waiting for pod "kube-proxy-54kwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:35.816672 1968426 pod_ready.go:94] pod "kube-proxy-54kwh" is "Ready"
	I1217 11:55:35.816705 1968426 pod_ready.go:86] duration metric: took 400.512173ms for pod "kube-proxy-54kwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.015880 1968426 pod_ready.go:83] waiting for pod "kube-scheduler-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.416249 1968426 pod_ready.go:94] pod "kube-scheduler-auto-213935" is "Ready"
	I1217 11:55:36.416277 1968426 pod_ready.go:86] duration metric: took 400.367181ms for pod "kube-scheduler-auto-213935" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:36.416289 1968426 pod_ready.go:40] duration metric: took 1.605081184s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:36.462404 1968426 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:55:36.464325 1968426 out.go:179] * Done! kubectl is now configured to use "auto-213935" cluster and "default" namespace by default
	W1217 11:55:34.697503 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	W1217 11:55:37.196552 1968420 pod_ready.go:104] pod "coredns-66bc5c9577-t66bd" is not "Ready", error: <nil>
	I1217 11:55:38.197165 1968420 pod_ready.go:94] pod "coredns-66bc5c9577-t66bd" is "Ready"
	I1217 11:55:38.197201 1968420 pod_ready.go:86] duration metric: took 31.506592273s for pod "coredns-66bc5c9577-t66bd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.199718 1968420 pod_ready.go:83] waiting for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.204687 1968420 pod_ready.go:94] pod "etcd-embed-certs-542273" is "Ready"
	I1217 11:55:38.204716 1968420 pod_ready.go:86] duration metric: took 4.969846ms for pod "etcd-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.207346 1968420 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.212242 1968420 pod_ready.go:94] pod "kube-apiserver-embed-certs-542273" is "Ready"
	I1217 11:55:38.212273 1968420 pod_ready.go:86] duration metric: took 4.899712ms for pod "kube-apiserver-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.214736 1968420 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.395360 1968420 pod_ready.go:94] pod "kube-controller-manager-embed-certs-542273" is "Ready"
	I1217 11:55:38.395391 1968420 pod_ready.go:86] duration metric: took 180.631954ms for pod "kube-controller-manager-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.595230 1968420 pod_ready.go:83] waiting for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:38.995218 1968420 pod_ready.go:94] pod "kube-proxy-gfbw9" is "Ready"
	I1217 11:55:38.995250 1968420 pod_ready.go:86] duration metric: took 399.986048ms for pod "kube-proxy-gfbw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.194526 1968420 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.595252 1968420 pod_ready.go:94] pod "kube-scheduler-embed-certs-542273" is "Ready"
	I1217 11:55:39.595285 1968420 pod_ready.go:86] duration metric: took 400.717588ms for pod "kube-scheduler-embed-certs-542273" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:39.595302 1968420 pod_ready.go:40] duration metric: took 32.909508699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:55:39.642757 1968420 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:55:39.644690 1968420 out.go:179] * Done! kubectl is now configured to use "embed-certs-542273" cluster and "default" namespace by default
	W1217 11:55:35.996074 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:38.496938 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:40.996388 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:43.496698 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:45.997050 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:48.496867 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 11:55:17 embed-certs-542273 crio[601]: time="2025-12-17T11:55:17.246021307Z" level=info msg="Created container 24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444/kubernetes-dashboard" id=c0285e8d-a097-4488-81b9-c084c63574ec name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:17 embed-certs-542273 crio[601]: time="2025-12-17T11:55:17.247841858Z" level=info msg="Starting container: 24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70" id=75cea8da-54ed-4d5e-8568-fb6c50146349 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:17 embed-certs-542273 crio[601]: time="2025-12-17T11:55:17.250812923Z" level=info msg="Started container" PID=1756 containerID=24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444/kubernetes-dashboard id=75cea8da-54ed-4d5e-8568-fb6c50146349 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6210742959cf6f6a00f18dc08c3e5b474e175e8146ef96872a1d45e58e75d606
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.897695627Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a47be1bb-8d7e-42ef-ae3d-91269277d8ea name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.900328259Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ae405c15-058f-4d00-a244-a38ecb9c6b79 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.90505667Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper" id=7b6d7abc-33ac-4d9e-a41a-8f32a4cd8fe0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.905212824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.911579965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.91209142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.942351784Z" level=info msg="Created container 61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper" id=7b6d7abc-33ac-4d9e-a41a-8f32a4cd8fe0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.943053945Z" level=info msg="Starting container: 61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076" id=8338d3a0-7d88-45b8-b132-959ee36d40ec name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.944869266Z" level=info msg="Started container" PID=1775 containerID=61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper id=8338d3a0-7d88-45b8-b132-959ee36d40ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=a0501bf3a2b4dbb422394a128586713b05e755ada0cf505ccea7b6e06fa3c11f
	Dec 17 11:55:29 embed-certs-542273 crio[601]: time="2025-12-17T11:55:29.041927603Z" level=info msg="Removing container: d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0" id=af1d3ec2-64e5-4a9e-a888-46badc9cf197 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:29 embed-certs-542273 crio[601]: time="2025-12-17T11:55:29.051504882Z" level=info msg="Removed container d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper" id=af1d3ec2-64e5-4a9e-a888-46badc9cf197 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.065892334Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ddfae3d-4b0a-4e4e-804b-2466efc35121 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.066987891Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=87b12512-bc35-4a31-abd9-0e87bb3b91bf name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.068120721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d858a023-73d8-4b27-8ff5-b42c78d4b30d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.06826427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.07273472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.072928269Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/04476c09502c536d63da40167b7d751489f6cb6e699806e46412f20c5c826884/merged/etc/passwd: no such file or directory"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.072964527Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/04476c09502c536d63da40167b7d751489f6cb6e699806e46412f20c5c826884/merged/etc/group: no such file or directory"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.073274502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.095810835Z" level=info msg="Created container 1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57: kube-system/storage-provisioner/storage-provisioner" id=d858a023-73d8-4b27-8ff5-b42c78d4b30d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.096434127Z" level=info msg="Starting container: 1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57" id=f24bc66f-43ad-4033-b73c-a5f1b572b7eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.098362665Z" level=info msg="Started container" PID=1789 containerID=1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57 description=kube-system/storage-provisioner/storage-provisioner id=f24bc66f-43ad-4033-b73c-a5f1b572b7eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=bce4bcd96368206ea7c45472b6abcafba2d66a593a0139960d138b31db85d686
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1373da4a0aa89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   bce4bcd963682       storage-provisioner                          kube-system
	61d34e9aca683       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   a0501bf3a2b4d       dashboard-metrics-scraper-6ffb444bf9-p46mz   kubernetes-dashboard
	24be3a7f600c5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   6210742959cf6       kubernetes-dashboard-855c9754f9-4l444        kubernetes-dashboard
	a64004b155ed2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   1880c996f5aae       busybox                                      default
	4273013d360ec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   46643ebca44ab       coredns-66bc5c9577-t66bd                     kube-system
	c4da15d668f5e       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           48 seconds ago      Running             kindnet-cni                 0                   9758ecad63387       kindnet-lvlhs                                kube-system
	7d0e62b7ae832       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   bce4bcd963682       storage-provisioner                          kube-system
	f398beed018fa       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           48 seconds ago      Running             kube-proxy                  0                   1eb5f95782519       kube-proxy-gfbw9                             kube-system
	dfe482616e842       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           52 seconds ago      Running             kube-scheduler              0                   24aadec124b44       kube-scheduler-embed-certs-542273            kube-system
	66e8eb832ab4f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   b9b1d7898d938       etcd-embed-certs-542273                      kube-system
	a0c2e00338830       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           52 seconds ago      Running             kube-apiserver              0                   f4a0d70ee0486       kube-apiserver-embed-certs-542273            kube-system
	519a5111a4600       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           52 seconds ago      Running             kube-controller-manager     0                   355dcfce4a815       kube-controller-manager-embed-certs-542273   kube-system
	
	
	==> coredns [4273013d360ec8c8e165713eb420e127b9ac50d03a71760a379b7d109d56ca70] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35244 - 40574 "HINFO IN 7941805140947108205.201617975667839127. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026681507s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-542273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-542273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=embed-certs-542273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-542273
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-542273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                9ff27ec3-7f97-49af-87a4-abbb0c483315
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-t66bd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-542273                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-lvlhs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-542273             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-542273    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-gfbw9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-542273             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p46mz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4l444         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-542273 event: Registered Node embed-certs-542273 in Controller
	  Normal  NodeReady                93s                  kubelet          Node embed-certs-542273 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)    kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)    kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)    kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node embed-certs-542273 event: Registered Node embed-certs-542273 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [66e8eb832ab4f5366549961c0b2bb218b272bf70168a1b853d7a1ea9895c604d] <==
	{"level":"warn","ts":"2025-12-17T11:55:04.446519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.457796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.467337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.477691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.486290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.495433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.503508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.512100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.520968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.529783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.538751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.549099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.559831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.568364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.579078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.589443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.600076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.610347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.620352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.631269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.642007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.658448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.670067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.680461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.741479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:55:54 up  5:38,  0 user,  load average: 5.37, 4.13, 2.59
	Linux embed-certs-542273 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4da15d668f5e0c2ba173770df24ded1614df1b9ae6d62a4056fbf6f97e50172] <==
	I1217 11:55:06.580746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:55:06.581038       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:55:06.581230       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:55:06.581260       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:55:06.581274       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:55:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:55:06.787294       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:55:06.787383       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:55:06.787423       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:55:06.787584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:55:07.088618       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:55:07.088667       1 metrics.go:72] Registering metrics
	I1217 11:55:07.088739       1 controller.go:711] "Syncing nftables rules"
	I1217 11:55:16.786665       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:16.786728       1 main.go:301] handling current node
	I1217 11:55:26.785331       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:26.785397       1 main.go:301] handling current node
	I1217 11:55:36.785295       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:36.785341       1 main.go:301] handling current node
	I1217 11:55:46.785723       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:46.785772       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0c2e003388306e1709cba308307c9c32f132cc9f51622dfcf37e31be663ef38] <==
	I1217 11:55:05.370692       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 11:55:05.370855       1 aggregator.go:171] initial CRD sync complete...
	I1217 11:55:05.370875       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:55:05.370883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:55:05.370890       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:55:05.371685       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 11:55:05.371826       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 11:55:05.372250       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 11:55:05.372295       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:55:05.379382       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 11:55:05.379522       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 11:55:05.416036       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:55:05.452744       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 11:55:05.459298       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:55:05.844940       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:55:05.891601       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:55:05.922992       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:55:05.931621       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:55:05.944390       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:55:06.021894       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.55.66"}
	I1217 11:55:06.039043       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.207.65"}
	I1217 11:55:06.288781       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:55:08.719699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:55:09.068204       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:55:09.318389       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [519a5111a4600b89107c3202de3f67b9bc492c3b2f1e0cd7846625b575c28310] <==
	I1217 11:55:08.710568       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:55:08.712827       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 11:55:08.714128       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 11:55:08.714139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:55:08.714150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 11:55:08.714158       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 11:55:08.714180       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 11:55:08.714450       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 11:55:08.714461       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:55:08.715517       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 11:55:08.715961       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:55:08.716406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 11:55:08.718763       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 11:55:08.721046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 11:55:08.721103       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:08.721126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:55:08.721152       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 11:55:08.721195       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 11:55:08.721224       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 11:55:08.721230       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 11:55:08.721235       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 11:55:08.722351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:55:08.722381       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:55:08.722448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:08.737267       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f398beed018faf9bbc2e0cce3ebe9161b6148e792e45e5cf0f77341e02476b82] <==
	I1217 11:55:06.350772       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:55:06.416424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:55:06.517250       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:55:06.517304       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 11:55:06.517422       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:55:06.544012       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:55:06.544108       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:55:06.553282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:55:06.553756       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:55:06.553834       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:06.556544       1 config.go:200] "Starting service config controller"
	I1217 11:55:06.556829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:55:06.556723       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:55:06.556897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:55:06.556734       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:55:06.557109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:55:06.557172       1 config.go:309] "Starting node config controller"
	I1217 11:55:06.557186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:55:06.657611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:55:06.657625       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:55:06.657623       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:55:06.657640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [dfe482616e84293a27eb3b23ada5a5a0ed3f7b9365e8582247b4ebc8ecd21761] <==
	I1217 11:55:02.640299       1 serving.go:386] Generated self-signed cert in-memory
	W1217 11:55:05.322861       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 11:55:05.322894       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1217 11:55:05.322908       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 11:55:05.322917       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 11:55:05.383841       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 11:55:05.383941       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:05.387387       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:05.387425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:05.387820       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:55:05.387896       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 11:55:05.491698       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:55:09 embed-certs-542273 kubelet[767]: I1217 11:55:09.290173     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrv8b\" (UniqueName: \"kubernetes.io/projected/8c548eba-7519-4304-b5bd-06ec979e367c-kube-api-access-mrv8b\") pod \"dashboard-metrics-scraper-6ffb444bf9-p46mz\" (UID: \"8c548eba-7519-4304-b5bd-06ec979e367c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz"
	Dec 17 11:55:09 embed-certs-542273 kubelet[767]: I1217 11:55:09.290192     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4b5n\" (UniqueName: \"kubernetes.io/projected/2c145daa-4b13-4d9d-9c48-dac61c781395-kube-api-access-q4b5n\") pod \"kubernetes-dashboard-855c9754f9-4l444\" (UID: \"2c145daa-4b13-4d9d-9c48-dac61c781395\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444"
	Dec 17 11:55:09 embed-certs-542273 kubelet[767]: I1217 11:55:09.290272     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c145daa-4b13-4d9d-9c48-dac61c781395-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4l444\" (UID: \"2c145daa-4b13-4d9d-9c48-dac61c781395\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444"
	Dec 17 11:55:12 embed-certs-542273 kubelet[767]: I1217 11:55:12.974771     767 scope.go:117] "RemoveContainer" containerID="c482a843d469df974fce1482f1eb117a31306ff79d7db121aa256020a25acece"
	Dec 17 11:55:13 embed-certs-542273 kubelet[767]: I1217 11:55:13.981849     767 scope.go:117] "RemoveContainer" containerID="c482a843d469df974fce1482f1eb117a31306ff79d7db121aa256020a25acece"
	Dec 17 11:55:13 embed-certs-542273 kubelet[767]: I1217 11:55:13.982179     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:13 embed-certs-542273 kubelet[767]: E1217 11:55:13.982349     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:14 embed-certs-542273 kubelet[767]: I1217 11:55:14.997380     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:15 embed-certs-542273 kubelet[767]: E1217 11:55:15.000181     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:16 embed-certs-542273 kubelet[767]: I1217 11:55:16.000610     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:16 embed-certs-542273 kubelet[767]: E1217 11:55:16.000830     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:18 embed-certs-542273 kubelet[767]: I1217 11:55:18.020684     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444" podStartSLOduration=1.291323121 podStartE2EDuration="9.020646605s" podCreationTimestamp="2025-12-17 11:55:09 +0000 UTC" firstStartedPulling="2025-12-17 11:55:09.470302274 +0000 UTC m=+7.668882584" lastFinishedPulling="2025-12-17 11:55:17.199625771 +0000 UTC m=+15.398206068" observedRunningTime="2025-12-17 11:55:18.020505129 +0000 UTC m=+16.219085449" watchObservedRunningTime="2025-12-17 11:55:18.020646605 +0000 UTC m=+16.219226925"
	Dec 17 11:55:28 embed-certs-542273 kubelet[767]: I1217 11:55:28.896988     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:29 embed-certs-542273 kubelet[767]: I1217 11:55:29.040598     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:29 embed-certs-542273 kubelet[767]: I1217 11:55:29.040808     767 scope.go:117] "RemoveContainer" containerID="61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	Dec 17 11:55:29 embed-certs-542273 kubelet[767]: E1217 11:55:29.041004     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:34 embed-certs-542273 kubelet[767]: I1217 11:55:34.775502     767 scope.go:117] "RemoveContainer" containerID="61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	Dec 17 11:55:34 embed-certs-542273 kubelet[767]: E1217 11:55:34.775735     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:37 embed-certs-542273 kubelet[767]: I1217 11:55:37.065332     767 scope.go:117] "RemoveContainer" containerID="7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71"
	Dec 17 11:55:46 embed-certs-542273 kubelet[767]: I1217 11:55:46.896367     767 scope.go:117] "RemoveContainer" containerID="61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	Dec 17 11:55:46 embed-certs-542273 kubelet[767]: E1217 11:55:46.896608     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: kubelet.service: Consumed 1.711s CPU time.
	
	
	==> kubernetes-dashboard [24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70] <==
	2025/12/17 11:55:17 Starting overwatch
	2025/12/17 11:55:17 Using namespace: kubernetes-dashboard
	2025/12/17 11:55:17 Using in-cluster config to connect to apiserver
	2025/12/17 11:55:17 Using secret token for csrf signing
	2025/12/17 11:55:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:55:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:55:17 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 11:55:17 Generating JWE encryption key
	2025/12/17 11:55:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:55:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:55:17 Initializing JWE encryption key from synchronized object
	2025/12/17 11:55:17 Creating in-cluster Sidecar client
	2025/12/17 11:55:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:17 Serving insecurely on HTTP port: 9090
	2025/12/17 11:55:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57] <==
	I1217 11:55:37.110631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:55:37.117985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:55:37.118025       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:55:37.120129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:40.575513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:44.835819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:48.434008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:51.488740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:54.512086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:54.518341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:54.518526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:55:54.518788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8cbc9e0-f980-443c-9469-43664e3fa9a6", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-542273_0a39d567-4bdf-4b83-84b5-75ab8969974b became leader
	I1217 11:55:54.518843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-542273_0a39d567-4bdf-4b83-84b5-75ab8969974b!
	W1217 11:55:54.522222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:54.529953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:54.619096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-542273_0a39d567-4bdf-4b83-84b5-75ab8969974b!
	
	
	==> storage-provisioner [7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71] <==
	I1217 11:55:06.287029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:55:36.298822       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
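The storage-provisioner block above shows the first container instance (7d0e62b7ae832) exiting after an i/o timeout against the in-cluster apiserver service (10.96.0.1:443), before the restarted instance (1373da4a0aa89) acquired the hostpath lease. When reproducing locally, one hedged way to pull that crashed instance's logs — assuming the profile's kubeconfig context from this report is loaded — is kubectl's --previous flag:

	kubectl --context embed-certs-542273 -n kube-system logs storage-provisioner --previous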
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-542273 -n embed-certs-542273
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-542273 -n embed-certs-542273: exit status 2 (385.803336ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
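Only the APIServer field is queried through the --format template here, and the helper tolerates the non-zero exit. When reproducing locally it can be easier to drop the template and read the component fields (host, kubelet, apiserver, kubeconfig) together; a minimal sketch against the same profile:

	out/minikube-linux-amd64 status -p embed-certs-542273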
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-542273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-542273
helpers_test.go:244: (dbg) docker inspect embed-certs-542273:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c",
	        "Created": "2025-12-17T11:53:42.422221245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1968979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:54:54.402301783Z",
	            "FinishedAt": "2025-12-17T11:54:53.21201382Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c/b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c-json.log",
	        "Name": "/embed-certs-542273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-542273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-542273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1f11181a02bb30cb6af9c4f132087ccbf6e110c9fb2c0a10aee91b906a9420c",
	                "LowerDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c102ce28104ee581f6af0f2cf267dacc544c110adff62fcedea84076e9333490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-542273",
	                "Source": "/var/lib/docker/volumes/embed-certs-542273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-542273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-542273",
	                "name.minikube.sigs.k8s.io": "embed-certs-542273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0ec644fd21e5ff45e68d383a8ce3644af96c2fce65b0c252c0207c5d785e5334",
	            "SandboxKey": "/var/run/docker/netns/0ec644fd21e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34631"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34632"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34635"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34633"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34634"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-542273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d402fb644edc9023d8248c192d3a2f7035874f1b3b272648cd1fc766ab85445",
	                    "EndpointID": "475b93239a6e00a94e48bd524dd2c61965174f3062c499a41e742e6ca705e136",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ba:f3:54:0a:c5:c6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-542273",
	                        "b1f11181a02b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
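The NetworkSettings.Ports map in the inspect output above records the localhost bindings for the container's published ports (8443/tcp is bound to 127.0.0.1:34634 here). As a quick cross-check against that JSON, a single binding can be extracted with a standard docker Go template; a sketch against the same container:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-542273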
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273: exit status 2 (393.60133ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-542273 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-542273 logs -n 25: (1.40309144s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ image   │ old-k8s-version-401285 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ pause   │ -p old-k8s-version-401285 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p old-k8s-version-401285                                                                                                                                                                                                                          │ old-k8s-version-401285       │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p kubernetes-upgrade-556754                                                                                                                                                                                                                       │ kubernetes-upgrade-556754    │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ delete  │ -p disable-driver-mounts-618082                                                                                                                                                                                                                    │ disable-driver-mounts-618082 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:53 UTC │
	│ start   │ -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │ 17 Dec 25 11:54 UTC │
	│ delete  │ -p stopped-upgrade-287611                                                                                                                                                                                                                          │ stopped-upgrade-287611       │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-737478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p no-preload-737478 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-601829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p newest-cni-601829 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-601829            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-542273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ stop    │ -p embed-certs-542273 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:54 UTC │
	│ start   │ -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-737478            │ jenkins │ v1.37.0 │ 17 Dec 25 11:54 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p kindnet-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                           │ kindnet-213935               │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo iptables -t nat -L -n -v                                                                                                                                                                                                       │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:55:55
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:55:55.813967 1981818 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:55.814268 1981818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:55.814283 1981818 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:55.814288 1981818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:55.814587 1981818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:55:55.815149 1981818 out.go:368] Setting JSON to false
	I1217 11:55:55.816869 1981818 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20301,"bootTime":1765952255,"procs":441,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:55:55.816945 1981818 start.go:143] virtualization: kvm guest
	I1217 11:55:55.819115 1981818 out.go:179] * [kindnet-213935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:55:55.821069 1981818 notify.go:221] Checking for updates...
	I1217 11:55:55.821089 1981818 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:55:55.822543 1981818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:55.823713 1981818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:55:55.824940 1981818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:55:55.826229 1981818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:55:55.827460 1981818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1217 11:55:50.996455 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	W1217 11:55:52.999642 1972864 pod_ready.go:104] pod "coredns-66bc5c9577-8nz5c" is not "Ready", error: <nil>
	I1217 11:55:55.497220 1972864 pod_ready.go:94] pod "coredns-66bc5c9577-8nz5c" is "Ready"
	I1217 11:55:55.497248 1972864 pod_ready.go:86] duration metric: took 37.506777017s for pod "coredns-66bc5c9577-8nz5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.500089 1972864 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-382022" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.504700 1972864 pod_ready.go:94] pod "etcd-default-k8s-diff-port-382022" is "Ready"
	I1217 11:55:55.504724 1972864 pod_ready.go:86] duration metric: took 4.606031ms for pod "etcd-default-k8s-diff-port-382022" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.507324 1972864 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-382022" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.511952 1972864 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-382022" is "Ready"
	I1217 11:55:55.511989 1972864 pod_ready.go:86] duration metric: took 4.639375ms for pod "kube-apiserver-default-k8s-diff-port-382022" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.514262 1972864 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-382022" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.693947 1972864 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-382022" is "Ready"
	I1217 11:55:55.693984 1972864 pod_ready.go:86] duration metric: took 179.696829ms for pod "kube-controller-manager-default-k8s-diff-port-382022" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.895298 1972864 pod_ready.go:83] waiting for pod "kube-proxy-ss2p8" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:55:55.829144 1981818 config.go:182] Loaded profile config "auto-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:55.829252 1981818 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:55.829351 1981818 config.go:182] Loaded profile config "embed-certs-542273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:55:55.829478 1981818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:55.858514 1981818 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:55:55.858661 1981818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:55.939980 1981818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 11:55:55.927582105 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:55.940105 1981818 docker.go:319] overlay module found
	I1217 11:55:55.942198 1981818 out.go:179] * Using the docker driver based on user configuration
	I1217 11:55:55.943641 1981818 start.go:309] selected driver: docker
	I1217 11:55:55.943661 1981818 start.go:927] validating driver "docker" against <nil>
	I1217 11:55:55.943675 1981818 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:55.944442 1981818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:56.010581 1981818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:55:55.99929108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:55:56.010831 1981818 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:55:56.011109 1981818 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:56.013197 1981818 out.go:179] * Using Docker driver with root privileges
	I1217 11:55:56.014574 1981818 cni.go:84] Creating CNI manager for "kindnet"
	I1217 11:55:56.014602 1981818 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:55:56.014674 1981818 start.go:353] cluster config:
	{Name:kindnet-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:56.016260 1981818 out.go:179] * Starting "kindnet-213935" primary control-plane node in "kindnet-213935" cluster
	I1217 11:55:56.017435 1981818 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:55:56.018816 1981818 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:56.019975 1981818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:55:56.020027 1981818 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:55:56.020038 1981818 cache.go:65] Caching tarball of preloaded images
	I1217 11:55:56.020065 1981818 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:56.020210 1981818 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:55:56.020228 1981818 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:55:56.020352 1981818 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/config.json ...
	I1217 11:55:56.020378 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/config.json: {Name:mk6e15e9e5bef48f45f5829388d163b5419d6aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:55:56.046179 1981818 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:56.046201 1981818 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:56.046221 1981818 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:56.046259 1981818 start.go:360] acquireMachinesLock for kindnet-213935: {Name:mk9f9edee42385cc71c92fc28b8436de197f5e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:56.046394 1981818 start.go:364] duration metric: took 111.012µs to acquireMachinesLock for "kindnet-213935"
	I1217 11:55:56.046432 1981818 start.go:93] Provisioning new machine with config: &{Name:kindnet-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:55:56.046558 1981818 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 17 11:55:17 embed-certs-542273 crio[601]: time="2025-12-17T11:55:17.246021307Z" level=info msg="Created container 24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444/kubernetes-dashboard" id=c0285e8d-a097-4488-81b9-c084c63574ec name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:17 embed-certs-542273 crio[601]: time="2025-12-17T11:55:17.247841858Z" level=info msg="Starting container: 24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70" id=75cea8da-54ed-4d5e-8568-fb6c50146349 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:17 embed-certs-542273 crio[601]: time="2025-12-17T11:55:17.250812923Z" level=info msg="Started container" PID=1756 containerID=24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444/kubernetes-dashboard id=75cea8da-54ed-4d5e-8568-fb6c50146349 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6210742959cf6f6a00f18dc08c3e5b474e175e8146ef96872a1d45e58e75d606
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.897695627Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a47be1bb-8d7e-42ef-ae3d-91269277d8ea name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.900328259Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ae405c15-058f-4d00-a244-a38ecb9c6b79 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.90505667Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper" id=7b6d7abc-33ac-4d9e-a41a-8f32a4cd8fe0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.905212824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.911579965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.91209142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.942351784Z" level=info msg="Created container 61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper" id=7b6d7abc-33ac-4d9e-a41a-8f32a4cd8fe0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.943053945Z" level=info msg="Starting container: 61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076" id=8338d3a0-7d88-45b8-b132-959ee36d40ec name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:28 embed-certs-542273 crio[601]: time="2025-12-17T11:55:28.944869266Z" level=info msg="Started container" PID=1775 containerID=61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper id=8338d3a0-7d88-45b8-b132-959ee36d40ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=a0501bf3a2b4dbb422394a128586713b05e755ada0cf505ccea7b6e06fa3c11f
	Dec 17 11:55:29 embed-certs-542273 crio[601]: time="2025-12-17T11:55:29.041927603Z" level=info msg="Removing container: d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0" id=af1d3ec2-64e5-4a9e-a888-46badc9cf197 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:29 embed-certs-542273 crio[601]: time="2025-12-17T11:55:29.051504882Z" level=info msg="Removed container d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz/dashboard-metrics-scraper" id=af1d3ec2-64e5-4a9e-a888-46badc9cf197 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.065892334Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ddfae3d-4b0a-4e4e-804b-2466efc35121 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.066987891Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=87b12512-bc35-4a31-abd9-0e87bb3b91bf name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.068120721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d858a023-73d8-4b27-8ff5-b42c78d4b30d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.06826427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.07273472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.072928269Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/04476c09502c536d63da40167b7d751489f6cb6e699806e46412f20c5c826884/merged/etc/passwd: no such file or directory"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.072964527Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/04476c09502c536d63da40167b7d751489f6cb6e699806e46412f20c5c826884/merged/etc/group: no such file or directory"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.073274502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.095810835Z" level=info msg="Created container 1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57: kube-system/storage-provisioner/storage-provisioner" id=d858a023-73d8-4b27-8ff5-b42c78d4b30d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.096434127Z" level=info msg="Starting container: 1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57" id=f24bc66f-43ad-4033-b73c-a5f1b572b7eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:37 embed-certs-542273 crio[601]: time="2025-12-17T11:55:37.098362665Z" level=info msg="Started container" PID=1789 containerID=1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57 description=kube-system/storage-provisioner/storage-provisioner id=f24bc66f-43ad-4033-b73c-a5f1b572b7eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=bce4bcd96368206ea7c45472b6abcafba2d66a593a0139960d138b31db85d686
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1373da4a0aa89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   bce4bcd963682       storage-provisioner                          kube-system
	61d34e9aca683       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   a0501bf3a2b4d       dashboard-metrics-scraper-6ffb444bf9-p46mz   kubernetes-dashboard
	24be3a7f600c5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   6210742959cf6       kubernetes-dashboard-855c9754f9-4l444        kubernetes-dashboard
	a64004b155ed2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   1880c996f5aae       busybox                                      default
	4273013d360ec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   46643ebca44ab       coredns-66bc5c9577-t66bd                     kube-system
	c4da15d668f5e       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   9758ecad63387       kindnet-lvlhs                                kube-system
	7d0e62b7ae832       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   bce4bcd963682       storage-provisioner                          kube-system
	f398beed018fa       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           50 seconds ago      Running             kube-proxy                  0                   1eb5f95782519       kube-proxy-gfbw9                             kube-system
	dfe482616e842       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           54 seconds ago      Running             kube-scheduler              0                   24aadec124b44       kube-scheduler-embed-certs-542273            kube-system
	66e8eb832ab4f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   b9b1d7898d938       etcd-embed-certs-542273                      kube-system
	a0c2e00338830       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           54 seconds ago      Running             kube-apiserver              0                   f4a0d70ee0486       kube-apiserver-embed-certs-542273            kube-system
	519a5111a4600       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           54 seconds ago      Running             kube-controller-manager     0                   355dcfce4a815       kube-controller-manager-embed-certs-542273   kube-system
	
	
	==> coredns [4273013d360ec8c8e165713eb420e127b9ac50d03a71760a379b7d109d56ca70] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35244 - 40574 "HINFO IN 7941805140947108205.201617975667839127. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026681507s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-542273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-542273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=embed-certs-542273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-542273
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:53:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:55:45 +0000   Wed, 17 Dec 2025 11:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-542273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                9ff27ec3-7f97-49af-87a4-abbb0c483315
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-t66bd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-542273                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-lvlhs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-542273             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-542273    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-gfbw9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-542273             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p46mz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4l444         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-542273 event: Registered Node embed-certs-542273 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-542273 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node embed-certs-542273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node embed-certs-542273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node embed-certs-542273 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-542273 event: Registered Node embed-certs-542273 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [66e8eb832ab4f5366549961c0b2bb218b272bf70168a1b853d7a1ea9895c604d] <==
	{"level":"warn","ts":"2025-12-17T11:55:04.446519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.457796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.467337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.477691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.486290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.495433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.503508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.512100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.520968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.529783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.538751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.549099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.559831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.568364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.579078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.589443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.600076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.610347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.620352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.631269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.642007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.658448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.670067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.680461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:04.741479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:55:57 up  5:38,  0 user,  load average: 5.34, 4.14, 2.61
	Linux embed-certs-542273 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4da15d668f5e0c2ba173770df24ded1614df1b9ae6d62a4056fbf6f97e50172] <==
	I1217 11:55:06.580746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:55:06.581038       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 11:55:06.581230       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:55:06.581260       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:55:06.581274       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:55:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:55:06.787294       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:55:06.787383       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:55:06.787423       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:55:06.787584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:55:07.088618       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:55:07.088667       1 metrics.go:72] Registering metrics
	I1217 11:55:07.088739       1 controller.go:711] "Syncing nftables rules"
	I1217 11:55:16.786665       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:16.786728       1 main.go:301] handling current node
	I1217 11:55:26.785331       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:26.785397       1 main.go:301] handling current node
	I1217 11:55:36.785295       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:36.785341       1 main.go:301] handling current node
	I1217 11:55:46.785723       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:46.785772       1 main.go:301] handling current node
	I1217 11:55:56.794683       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 11:55:56.794723       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0c2e003388306e1709cba308307c9c32f132cc9f51622dfcf37e31be663ef38] <==
	I1217 11:55:05.370692       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 11:55:05.370855       1 aggregator.go:171] initial CRD sync complete...
	I1217 11:55:05.370875       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:55:05.370883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:55:05.370890       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:55:05.371685       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 11:55:05.371826       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 11:55:05.372250       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 11:55:05.372295       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:55:05.379382       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 11:55:05.379522       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 11:55:05.416036       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:55:05.452744       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 11:55:05.459298       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:55:05.844940       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:55:05.891601       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:55:05.922992       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:55:05.931621       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:55:05.944390       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:55:06.021894       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.55.66"}
	I1217 11:55:06.039043       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.207.65"}
	I1217 11:55:06.288781       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:55:08.719699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:55:09.068204       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:55:09.318389       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [519a5111a4600b89107c3202de3f67b9bc492c3b2f1e0cd7846625b575c28310] <==
	I1217 11:55:08.710568       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:55:08.712827       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 11:55:08.714128       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 11:55:08.714139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:55:08.714150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 11:55:08.714158       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 11:55:08.714180       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 11:55:08.714450       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 11:55:08.714461       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:55:08.715517       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 11:55:08.715961       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:55:08.716406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 11:55:08.718763       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 11:55:08.721046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 11:55:08.721103       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:08.721126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:55:08.721152       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 11:55:08.721195       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 11:55:08.721224       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 11:55:08.721230       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 11:55:08.721235       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 11:55:08.722351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:55:08.722381       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:55:08.722448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:08.737267       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f398beed018faf9bbc2e0cce3ebe9161b6148e792e45e5cf0f77341e02476b82] <==
	I1217 11:55:06.350772       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:55:06.416424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:55:06.517250       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:55:06.517304       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 11:55:06.517422       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:55:06.544012       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:55:06.544108       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:55:06.553282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:55:06.553756       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:55:06.553834       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:06.556544       1 config.go:200] "Starting service config controller"
	I1217 11:55:06.556829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:55:06.556723       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:55:06.556897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:55:06.556734       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:55:06.557109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:55:06.557172       1 config.go:309] "Starting node config controller"
	I1217 11:55:06.557186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:55:06.657611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:55:06.657625       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:55:06.657623       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:55:06.657640       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [dfe482616e84293a27eb3b23ada5a5a0ed3f7b9365e8582247b4ebc8ecd21761] <==
	I1217 11:55:02.640299       1 serving.go:386] Generated self-signed cert in-memory
	W1217 11:55:05.322861       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 11:55:05.322894       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1217 11:55:05.322908       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 11:55:05.322917       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 11:55:05.383841       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 11:55:05.383941       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:05.387387       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:05.387425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:05.387820       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:55:05.387896       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 11:55:05.491698       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:55:09 embed-certs-542273 kubelet[767]: I1217 11:55:09.290173     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrv8b\" (UniqueName: \"kubernetes.io/projected/8c548eba-7519-4304-b5bd-06ec979e367c-kube-api-access-mrv8b\") pod \"dashboard-metrics-scraper-6ffb444bf9-p46mz\" (UID: \"8c548eba-7519-4304-b5bd-06ec979e367c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz"
	Dec 17 11:55:09 embed-certs-542273 kubelet[767]: I1217 11:55:09.290192     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4b5n\" (UniqueName: \"kubernetes.io/projected/2c145daa-4b13-4d9d-9c48-dac61c781395-kube-api-access-q4b5n\") pod \"kubernetes-dashboard-855c9754f9-4l444\" (UID: \"2c145daa-4b13-4d9d-9c48-dac61c781395\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444"
	Dec 17 11:55:09 embed-certs-542273 kubelet[767]: I1217 11:55:09.290272     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c145daa-4b13-4d9d-9c48-dac61c781395-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4l444\" (UID: \"2c145daa-4b13-4d9d-9c48-dac61c781395\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444"
	Dec 17 11:55:12 embed-certs-542273 kubelet[767]: I1217 11:55:12.974771     767 scope.go:117] "RemoveContainer" containerID="c482a843d469df974fce1482f1eb117a31306ff79d7db121aa256020a25acece"
	Dec 17 11:55:13 embed-certs-542273 kubelet[767]: I1217 11:55:13.981849     767 scope.go:117] "RemoveContainer" containerID="c482a843d469df974fce1482f1eb117a31306ff79d7db121aa256020a25acece"
	Dec 17 11:55:13 embed-certs-542273 kubelet[767]: I1217 11:55:13.982179     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:13 embed-certs-542273 kubelet[767]: E1217 11:55:13.982349     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:14 embed-certs-542273 kubelet[767]: I1217 11:55:14.997380     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:15 embed-certs-542273 kubelet[767]: E1217 11:55:15.000181     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:16 embed-certs-542273 kubelet[767]: I1217 11:55:16.000610     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:16 embed-certs-542273 kubelet[767]: E1217 11:55:16.000830     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:18 embed-certs-542273 kubelet[767]: I1217 11:55:18.020684     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l444" podStartSLOduration=1.291323121 podStartE2EDuration="9.020646605s" podCreationTimestamp="2025-12-17 11:55:09 +0000 UTC" firstStartedPulling="2025-12-17 11:55:09.470302274 +0000 UTC m=+7.668882584" lastFinishedPulling="2025-12-17 11:55:17.199625771 +0000 UTC m=+15.398206068" observedRunningTime="2025-12-17 11:55:18.020505129 +0000 UTC m=+16.219085449" watchObservedRunningTime="2025-12-17 11:55:18.020646605 +0000 UTC m=+16.219226925"
	Dec 17 11:55:28 embed-certs-542273 kubelet[767]: I1217 11:55:28.896988     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:29 embed-certs-542273 kubelet[767]: I1217 11:55:29.040598     767 scope.go:117] "RemoveContainer" containerID="d16431916a25bba98b0366ae1576a0649ae0bb6e5d60e49d1ddbb6f4e06262d0"
	Dec 17 11:55:29 embed-certs-542273 kubelet[767]: I1217 11:55:29.040808     767 scope.go:117] "RemoveContainer" containerID="61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	Dec 17 11:55:29 embed-certs-542273 kubelet[767]: E1217 11:55:29.041004     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:34 embed-certs-542273 kubelet[767]: I1217 11:55:34.775502     767 scope.go:117] "RemoveContainer" containerID="61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	Dec 17 11:55:34 embed-certs-542273 kubelet[767]: E1217 11:55:34.775735     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:37 embed-certs-542273 kubelet[767]: I1217 11:55:37.065332     767 scope.go:117] "RemoveContainer" containerID="7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71"
	Dec 17 11:55:46 embed-certs-542273 kubelet[767]: I1217 11:55:46.896367     767 scope.go:117] "RemoveContainer" containerID="61d34e9aca6839ad01e70f4d31e5cd338ab122c1c147357895625bcd8d2ca076"
	Dec 17 11:55:46 embed-certs-542273 kubelet[767]: E1217 11:55:46.896608     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p46mz_kubernetes-dashboard(8c548eba-7519-4304-b5bd-06ec979e367c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p46mz" podUID="8c548eba-7519-4304-b5bd-06ec979e367c"
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:51 embed-certs-542273 systemd[1]: kubelet.service: Consumed 1.711s CPU time.
	
	
	==> kubernetes-dashboard [24be3a7f600c5eae389075f13d53d908070e9041941293b5181d09175d5fcd70] <==
	2025/12/17 11:55:17 Starting overwatch
	2025/12/17 11:55:17 Using namespace: kubernetes-dashboard
	2025/12/17 11:55:17 Using in-cluster config to connect to apiserver
	2025/12/17 11:55:17 Using secret token for csrf signing
	2025/12/17 11:55:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:55:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:55:17 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 11:55:17 Generating JWE encryption key
	2025/12/17 11:55:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:55:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:55:17 Initializing JWE encryption key from synchronized object
	2025/12/17 11:55:17 Creating in-cluster Sidecar client
	2025/12/17 11:55:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:17 Serving insecurely on HTTP port: 9090
	2025/12/17 11:55:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1373da4a0aa898b78de321ff68679a4861b2fb0577f689b887414901e57ffe57] <==
	I1217 11:55:37.110631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:55:37.117985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:55:37.118025       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:55:37.120129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:40.575513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:44.835819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:48.434008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:51.488740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:54.512086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:54.518341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:54.518526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:55:54.518788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8cbc9e0-f980-443c-9469-43664e3fa9a6", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-542273_0a39d567-4bdf-4b83-84b5-75ab8969974b became leader
	I1217 11:55:54.518843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-542273_0a39d567-4bdf-4b83-84b5-75ab8969974b!
	W1217 11:55:54.522222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:54.529953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:55:54.619096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-542273_0a39d567-4bdf-4b83-84b5-75ab8969974b!
	W1217 11:55:56.534888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:56.544656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7d0e62b7ae832719e32aa2f113a172f5c8b5acb0f58b8130262b9b16ff577d71] <==
	I1217 11:55:06.287029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:55:36.298822       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-542273 -n embed-certs-542273
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-542273 -n embed-certs-542273: exit status 2 (362.705011ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-542273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.49s)
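
The post-mortem checks above are ordinary minikube/kubectl invocations and can be re-run by hand against the same profile. A minimal shell sketch, assuming the embed-certs-542273 profile and the out/minikube-linux-amd64 binary are still present on the host; only commands that already appear in the harness output above are used, and the comments are illustrative:

	# API server status, as run by helpers_test.go:263 above
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-542273 -n embed-certs-542273
	# Pods not in Running phase, as run by helpers_test.go:270 above
	kubectl --context embed-certs-542273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running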

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-382022 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-382022 --alsologtostderr -v=1: exit status 80 (1.986448987s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-382022 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:56:08.880934 1989527 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:56:08.881295 1989527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:56:08.881312 1989527 out.go:374] Setting ErrFile to fd 2...
	I1217 11:56:08.881320 1989527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:56:08.881751 1989527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:56:08.882114 1989527 out.go:368] Setting JSON to false
	I1217 11:56:08.882151 1989527 mustload.go:66] Loading cluster: default-k8s-diff-port-382022
	I1217 11:56:08.882848 1989527 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.883320 1989527 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-382022 --format={{.State.Status}}
	I1217 11:56:08.907448 1989527 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:56:08.907875 1989527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:56:08.982444 1989527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-17 11:56:08.969736797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:56:08.983337 1989527 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-382022 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 11:56:08.985365 1989527 out.go:179] * Pausing node default-k8s-diff-port-382022 ... 
	I1217 11:56:08.986655 1989527 host.go:66] Checking if "default-k8s-diff-port-382022" exists ...
	I1217 11:56:08.987029 1989527 ssh_runner.go:195] Run: systemctl --version
	I1217 11:56:08.987085 1989527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-382022
	I1217 11:56:09.009936 1989527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34641 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/default-k8s-diff-port-382022/id_rsa Username:docker}
	I1217 11:56:09.107562 1989527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:56:09.131938 1989527 pause.go:52] kubelet running: true
	I1217 11:56:09.132007 1989527 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:56:09.354577 1989527 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:56:09.354722 1989527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:56:09.439188 1989527 cri.go:89] found id: "f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458"
	I1217 11:56:09.439213 1989527 cri.go:89] found id: "6f1603ab1d2f4c5f3c89f50a948af12dfbdf8479947f920fb97b296d0b0332fb"
	I1217 11:56:09.439220 1989527 cri.go:89] found id: "9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73"
	I1217 11:56:09.439225 1989527 cri.go:89] found id: "ca90c0ad17baa36018f17a949c22eb29104e7ffb781b57997d12ed329cb8c977"
	I1217 11:56:09.439231 1989527 cri.go:89] found id: "f557936eef47dbdcb08b9026a7f1a1443df08e5d35b1c935cf63462239b38e6e"
	I1217 11:56:09.439236 1989527 cri.go:89] found id: "8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910"
	I1217 11:56:09.439241 1989527 cri.go:89] found id: "b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7"
	I1217 11:56:09.439246 1989527 cri.go:89] found id: "7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0"
	I1217 11:56:09.439251 1989527 cri.go:89] found id: "6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466"
	I1217 11:56:09.439272 1989527 cri.go:89] found id: "ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	I1217 11:56:09.439281 1989527 cri.go:89] found id: "d724e8e3fe1b95f0c6f317a1032d94dbd2ab888e247b9ebaabf3b73d221d53a0"
	I1217 11:56:09.439287 1989527 cri.go:89] found id: ""
	I1217 11:56:09.439336 1989527 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:56:09.454636 1989527 retry.go:31] will retry after 204.970774ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:56:09Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:56:09.660127 1989527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:56:09.675247 1989527 pause.go:52] kubelet running: false
	I1217 11:56:09.675305 1989527 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:56:09.844457 1989527 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:56:09.844575 1989527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:56:09.931592 1989527 cri.go:89] found id: "f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458"
	I1217 11:56:09.931613 1989527 cri.go:89] found id: "6f1603ab1d2f4c5f3c89f50a948af12dfbdf8479947f920fb97b296d0b0332fb"
	I1217 11:56:09.931617 1989527 cri.go:89] found id: "9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73"
	I1217 11:56:09.931620 1989527 cri.go:89] found id: "ca90c0ad17baa36018f17a949c22eb29104e7ffb781b57997d12ed329cb8c977"
	I1217 11:56:09.931622 1989527 cri.go:89] found id: "f557936eef47dbdcb08b9026a7f1a1443df08e5d35b1c935cf63462239b38e6e"
	I1217 11:56:09.931626 1989527 cri.go:89] found id: "8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910"
	I1217 11:56:09.931630 1989527 cri.go:89] found id: "b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7"
	I1217 11:56:09.931635 1989527 cri.go:89] found id: "7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0"
	I1217 11:56:09.931639 1989527 cri.go:89] found id: "6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466"
	I1217 11:56:09.931648 1989527 cri.go:89] found id: "ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	I1217 11:56:09.931661 1989527 cri.go:89] found id: "d724e8e3fe1b95f0c6f317a1032d94dbd2ab888e247b9ebaabf3b73d221d53a0"
	I1217 11:56:09.931666 1989527 cri.go:89] found id: ""
	I1217 11:56:09.931704 1989527 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:56:09.944756 1989527 retry.go:31] will retry after 514.483073ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:56:09Z" level=error msg="open /run/runc: no such file or directory"
	I1217 11:56:10.459506 1989527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:56:10.476349 1989527 pause.go:52] kubelet running: false
	I1217 11:56:10.476422 1989527 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 11:56:10.677832 1989527 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 11:56:10.677915 1989527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 11:56:10.760367 1989527 cri.go:89] found id: "f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458"
	I1217 11:56:10.760419 1989527 cri.go:89] found id: "6f1603ab1d2f4c5f3c89f50a948af12dfbdf8479947f920fb97b296d0b0332fb"
	I1217 11:56:10.760426 1989527 cri.go:89] found id: "9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73"
	I1217 11:56:10.760431 1989527 cri.go:89] found id: "ca90c0ad17baa36018f17a949c22eb29104e7ffb781b57997d12ed329cb8c977"
	I1217 11:56:10.760436 1989527 cri.go:89] found id: "f557936eef47dbdcb08b9026a7f1a1443df08e5d35b1c935cf63462239b38e6e"
	I1217 11:56:10.760442 1989527 cri.go:89] found id: "8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910"
	I1217 11:56:10.760447 1989527 cri.go:89] found id: "b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7"
	I1217 11:56:10.760452 1989527 cri.go:89] found id: "7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0"
	I1217 11:56:10.760457 1989527 cri.go:89] found id: "6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466"
	I1217 11:56:10.760477 1989527 cri.go:89] found id: "ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	I1217 11:56:10.760486 1989527 cri.go:89] found id: "d724e8e3fe1b95f0c6f317a1032d94dbd2ab888e247b9ebaabf3b73d221d53a0"
	I1217 11:56:10.760491 1989527 cri.go:89] found id: ""
	I1217 11:56:10.760782 1989527 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 11:56:10.776487 1989527 out.go:203] 
	W1217 11:56:10.778032 1989527 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:56:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:56:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 11:56:10.778056 1989527 out.go:285] * 
	* 
	W1217 11:56:10.789345 1989527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:56:10.792309 1989527 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-382022 --alsologtostderr -v=1 failed: exit status 80
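The pause failure above comes from `sudo runc list -f json` exiting 1 because /run/runc does not exist on the node. A minimal diagnostic sketch, assuming SSH access to the node via `minikube ssh` for this profile (the checks are illustrative and not part of the test harness):

	# Hypothetical check: is the runc state directory present on the node?
	minikube -p default-k8s-diff-port-382022 ssh -- 'ls -ld /run/runc || echo "/run/runc missing"'
	# Mirror the crictl listing the pause code performs before shelling out to runc.
	minikube -p default-k8s-diff-port-382022 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system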
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-382022
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-382022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7",
	        "Created": "2025-12-17T11:54:00.607547087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1973171,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:55:06.227939626Z",
	            "FinishedAt": "2025-12-17T11:55:04.594955106Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/hosts",
	        "LogPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7-json.log",
	        "Name": "/default-k8s-diff-port-382022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-382022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-382022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7",
	                "LowerDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-382022",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-382022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-382022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-382022",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-382022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "81119c253dbfcbf9dcbebd996fe93966a0aef1a7787ac99e11a7739c898e271e",
	            "SandboxKey": "/var/run/docker/netns/81119c253dbf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34642"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34645"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34643"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34644"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-382022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "009b4cca67d182f2097fba9336c46a1ff7237dab7ad046bb8a1746aae27ee661",
	                    "EndpointID": "8fb50baf9feb56e54ab2b731a7cd9dc097c73fe62d86f927b1bdd0a04f37c436",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "b2:e8:20:49:15:aa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-382022",
	                        "4b7e99a28ab9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
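Individual fields from the inspect output above can be pulled with a Go-template format string instead of reading the full JSON, the same mechanism the harness itself uses; a small sketch against this container (field paths taken from the JSON above):

	# Container state only (prints "running" for the output above).
	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-382022
	# Host port mapped to 8444/tcp (34644 in the inspect output above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-382022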
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022: exit status 2 (387.553881ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382022 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-382022 logs -n 25: (2.334658632s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-542273                                                                                                                                              │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo systemctl cat docker --no-pager                                                                                                                │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo docker system info                                                                                                                             │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ ssh     │ -p auto-213935 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo cri-dockerd --version                                                                                                                          │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ delete  │ -p embed-certs-542273                                                                                                                                              │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ start   │ -p calico-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-213935                │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ ssh     │ -p auto-213935 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo containerd config dump                                                                                                                         │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo crio config                                                                                                                                    │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ delete  │ -p auto-213935                                                                                                                                                     │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ start   │ -p custom-flannel-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-213935        │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ image   │ default-k8s-diff-port-382022 image list --format=json                                                                                                              │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ pause   │ -p default-k8s-diff-port-382022 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:56:08
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:56:08.344925 1988935 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:56:08.345075 1988935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:56:08.345089 1988935 out.go:374] Setting ErrFile to fd 2...
	I1217 11:56:08.345095 1988935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:56:08.345334 1988935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:56:08.346135 1988935 out.go:368] Setting JSON to false
	I1217 11:56:08.347903 1988935 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20313,"bootTime":1765952255,"procs":515,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:56:08.347968 1988935 start.go:143] virtualization: kvm guest
	I1217 11:56:08.351306 1988935 out.go:179] * [custom-flannel-213935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:56:08.352805 1988935 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:56:08.352853 1988935 notify.go:221] Checking for updates...
	I1217 11:56:08.356427 1988935 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:56:08.358945 1988935 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:56:08.360191 1988935 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:56:08.361749 1988935 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:56:08.364870 1988935 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:56:08.366784 1988935 config.go:182] Loaded profile config "calico-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.366954 1988935 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.367081 1988935 config.go:182] Loaded profile config "kindnet-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.367210 1988935 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:56:08.399053 1988935 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:56:08.399203 1988935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:56:08.470586 1988935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:56:08.458571306 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:56:08.470711 1988935 docker.go:319] overlay module found
	I1217 11:56:08.476396 1988935 out.go:179] * Using the docker driver based on user configuration
	I1217 11:56:08.478158 1988935 start.go:309] selected driver: docker
	I1217 11:56:08.478183 1988935 start.go:927] validating driver "docker" against <nil>
	I1217 11:56:08.478197 1988935 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:56:08.478915 1988935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:56:08.568928 1988935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:56:08.554889898 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:56:08.569137 1988935 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:56:08.569452 1988935 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:56:08.573223 1988935 out.go:179] * Using Docker driver with root privileges
	I1217 11:56:08.576436 1988935 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1217 11:56:08.576473 1988935 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1217 11:56:08.576589 1988935 start.go:353] cluster config:
	{Name:custom-flannel-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:08.578253 1988935 out.go:179] * Starting "custom-flannel-213935" primary control-plane node in "custom-flannel-213935" cluster
	I1217 11:56:08.579626 1988935 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:56:08.581022 1988935 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:56:08.583121 1988935 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:56:08.583174 1988935 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:56:08.583190 1988935 cache.go:65] Caching tarball of preloaded images
	I1217 11:56:08.583244 1988935 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:56:08.583300 1988935 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:56:08.583312 1988935 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:56:08.583454 1988935 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/custom-flannel-213935/config.json ...
	I1217 11:56:08.583479 1988935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/custom-flannel-213935/config.json: {Name:mka72be7e9c041632dd273e52b418b695908ef27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.611327 1988935 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:56:08.611359 1988935 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:56:08.611380 1988935 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:56:08.611420 1988935 start.go:360] acquireMachinesLock for custom-flannel-213935: {Name:mk84d31db3628ebbaa1aa118c3375083ce996c9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:56:08.611578 1988935 start.go:364] duration metric: took 131.514µs to acquireMachinesLock for "custom-flannel-213935"
	I1217 11:56:08.611609 1988935 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-213935 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:56:08.611749 1988935 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:56:07.425632 1981818 cli_runner.go:164] Run: docker network inspect kindnet-213935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:56:07.444550 1981818 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 11:56:07.449074 1981818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:07.459839 1981818 kubeadm.go:884] updating cluster {Name:kindnet-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:56:07.459950 1981818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:56:07.459993 1981818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:07.494271 1981818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:56:07.494297 1981818 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:56:07.494359 1981818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:07.525323 1981818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:56:07.525357 1981818 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:56:07.525367 1981818 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.3 crio true true} ...
	I1217 11:56:07.525486 1981818 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-213935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1217 11:56:07.525620 1981818 ssh_runner.go:195] Run: crio config
	I1217 11:56:07.582036 1981818 cni.go:84] Creating CNI manager for "kindnet"
	I1217 11:56:07.582066 1981818 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:56:07.582088 1981818 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-213935 NodeName:kindnet-213935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:56:07.582254 1981818 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-213935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:56:07.582378 1981818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:56:07.595388 1981818 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:56:07.595468 1981818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:56:07.605813 1981818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1217 11:56:07.619994 1981818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:56:07.644245 1981818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1217 11:56:07.659595 1981818 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:56:07.664027 1981818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:07.676581 1981818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:07.779015 1981818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:07.801458 1981818 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935 for IP: 192.168.103.2
	I1217 11:56:07.801483 1981818 certs.go:195] generating shared ca certs ...
	I1217 11:56:07.801506 1981818 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:07.801712 1981818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:56:07.801757 1981818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:56:07.801767 1981818 certs.go:257] generating profile certs ...
	I1217 11:56:07.801834 1981818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.key
	I1217 11:56:07.801859 1981818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.crt with IP's: []
	I1217 11:56:07.935179 1981818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.crt ...
	I1217 11:56:07.935211 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.crt: {Name:mkddca328a9dee388f7d0ad7d7368d6f017e075b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:07.935393 1981818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.key ...
	I1217 11:56:07.935409 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.key: {Name:mk5e548c655398bb77a17227b59bb035919e8cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:07.935520 1981818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10
	I1217 11:56:07.935558 1981818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 11:56:08.008227 1981818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10 ...
	I1217 11:56:08.008254 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10: {Name:mk0ef44f36c69551520f3343a7f6b87c746bdbd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.008431 1981818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10 ...
	I1217 11:56:08.008449 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10: {Name:mk5834436567a607e5721a8ff14466228ff67460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.008582 1981818 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt
	I1217 11:56:08.008722 1981818 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key
	I1217 11:56:08.008806 1981818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key
	I1217 11:56:08.008825 1981818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt with IP's: []
	I1217 11:56:08.035302 1981818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt ...
	I1217 11:56:08.035341 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt: {Name:mk79220f228113a7cf44063d666b3e0c7bd65f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.035634 1981818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key ...
	I1217 11:56:08.035671 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key: {Name:mkdada61fd2bb1ac3ea77f9bf632ba8a3672f55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.035988 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:56:08.036084 1981818 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:56:08.036112 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:56:08.036170 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:56:08.036219 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:56:08.036272 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:56:08.036349 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:56:08.037382 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:56:08.073421 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:56:08.100428 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:56:08.141629 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:56:08.164403 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 11:56:08.188306 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:56:08.208451 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:56:08.230221 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:56:08.251204 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:56:08.271253 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:56:08.291962 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:56:08.312857 1981818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:56:08.327583 1981818 ssh_runner.go:195] Run: openssl version
	I1217 11:56:08.337057 1981818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.347349 1981818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:56:08.356625 1981818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.361092 1981818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.361150 1981818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.407263 1981818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:56:08.417377 1981818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.429990 1981818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:56:08.441809 1981818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.447702 1981818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.447768 1981818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.491430 1981818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:08.503757 1981818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.523126 1981818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:56:08.535081 1981818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.541519 1981818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.541646 1981818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.595217 1981818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
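	(Editor's note, not part of the test output: the ln/openssl/test sequence above installs each CA into the node's trust store — symlink the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, then confirm the <hash>.0 symlink that openssl looks up at verification time exists. Below is a minimal Go sketch of those same three shell steps; only the commands are taken from the log, the helper name and error handling are illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA repeats the trust-store steps seen in the log for one PEM file.
	func installCA(pem string) error {
		link := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
		// ln -fs <pem> /etc/ssl/certs/<name>
		if out, err := exec.Command("sudo", "ln", "-fs", pem, link).CombinedOutput(); err != nil {
			return fmt.Errorf("ln -fs: %v: %s", err, out)
		}
		// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. b5213941
		hash, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %v", err)
		}
		// verify the <hash>.0 symlink openssl expects is present
		hashLink := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(hash))+".0")
		if err := exec.Command("sudo", "test", "-L", hashLink).Run(); err != nil {
			return fmt.Errorf("missing hash symlink %s: %v", hashLink, err)
		}
		return nil
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}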
	I1217 11:56:08.605744 1981818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:56:08.611428 1981818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:56:08.611488 1981818 kubeadm.go:401] StartCluster: {Name:kindnet-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:08.611601 1981818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:56:08.611656 1981818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:56:08.646309 1981818 cri.go:89] found id: ""
	I1217 11:56:08.646385 1981818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:56:08.656208 1981818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:56:08.664858 1981818 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:56:08.664929 1981818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:56:08.674072 1981818 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:56:08.674091 1981818 kubeadm.go:158] found existing configuration files:
	
	I1217 11:56:08.674143 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:56:08.683988 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:56:08.684044 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:56:08.693069 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:56:08.702777 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:56:08.702849 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:56:08.713654 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:56:08.723291 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:56:08.723355 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:56:08.732294 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:56:08.742381 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:56:08.742441 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
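	(Editor's note, not part of the test output: the four grep/rm pairs above are the stale-kubeconfig check logged by kubeadm.go:164 — each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the kubeadm init that follows regenerates it. A hypothetical Go sketch of that loop; the file list and endpoint come straight from the log, the function itself is illustrative.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// cleanStaleConfigs removes any kubeconfig that does not mention the expected endpoint.
	func cleanStaleConfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "%s lacks %s, removing\n", f, endpoint)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443")
	}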
	I1217 11:56:08.751794 1981818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:56:08.797495 1981818 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:56:08.797596 1981818 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:56:08.823947 1981818 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:56:08.824056 1981818 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:56:08.824118 1981818 kubeadm.go:319] OS: Linux
	I1217 11:56:08.824185 1981818 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:56:08.824249 1981818 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:56:08.824335 1981818 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:56:08.824419 1981818 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:56:08.824501 1981818 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:56:08.824613 1981818 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:56:08.824696 1981818 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:56:08.824765 1981818 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:56:08.904923 1981818 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:56:08.905070 1981818 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:56:08.905196 1981818 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:56:08.914156 1981818 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:56:08.916610 1981818 out.go:252]   - Generating certificates and keys ...
	I1217 11:56:08.916747 1981818 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:56:08.916863 1981818 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:56:09.280063 1981818 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:56:09.502377 1981818 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:56:10.363874 1981818 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:56:10.785724 1981818 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	
	
	==> CRI-O <==
	Dec 17 11:55:28 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:28.025401187Z" level=info msg="Started container" PID=1806 containerID=f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper id=b9c6a73b-0b7c-4be2-a85d-2f73d9744aad name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c51ceb2b1d32338f753599e86faa696c5640ab62ee64fff356d5cb58e3926cb
	Dec 17 11:55:28 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:28.9636027Z" level=info msg="Removing container: 4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d" id=a3f697c2-4e65-4659-8394-2c55838e736a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:28 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:28.974298735Z" level=info msg="Removed container 4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=a3f697c2-4e65-4659-8394-2c55838e736a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.883978429Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7cb77da8-fa3d-417f-ba01-cdff6a9796f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.885038861Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28af450d-1a1d-4854-ad5e-1f5066048947 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.886250164Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=6448aa0d-fbd0-413f-9942-03968c39791d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.886482603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.892334969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.89302041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.927152222Z" level=info msg="Created container ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=6448aa0d-fbd0-413f-9942-03968c39791d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.927819595Z" level=info msg="Starting container: ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72" id=d7755200-e3e1-4c38-8e97-1e3310fd0324 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.92952515Z" level=info msg="Started container" PID=1817 containerID=ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper id=d7755200-e3e1-4c38-8e97-1e3310fd0324 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c51ceb2b1d32338f753599e86faa696c5640ab62ee64fff356d5cb58e3926cb
	Dec 17 11:55:45 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:45.012000192Z" level=info msg="Removing container: f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb" id=eb03c949-7110-4eca-afa8-45f5c6c37f20 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:45 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:45.021879324Z" level=info msg="Removed container f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=eb03c949-7110-4eca-afa8-45f5c6c37f20 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.023246378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=129c6fca-719c-4ff8-bb7c-772fffb5b960 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.024377123Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3fa48443-ee7c-46aa-bbc0-ffb484d7de8d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.02549083Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e662c21b-e144-4be5-bb6a-8431bc71290c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.02568819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030383234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030595711Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8296ce8ef1741954265e7371e6e1973ebcb1458ea4d0692182a2d62f03e62b90/merged/etc/passwd: no such file or directory"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030640024Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8296ce8ef1741954265e7371e6e1973ebcb1458ea4d0692182a2d62f03e62b90/merged/etc/group: no such file or directory"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030921701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.061201361Z" level=info msg="Created container f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458: kube-system/storage-provisioner/storage-provisioner" id=e662c21b-e144-4be5-bb6a-8431bc71290c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.061911767Z" level=info msg="Starting container: f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458" id=854ee3f5-ed55-468f-b792-b27c582bf759 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.063885754Z" level=info msg="Started container" PID=1831 containerID=f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458 description=kube-system/storage-provisioner/storage-provisioner id=854ee3f5-ed55-468f-b792-b27c582bf759 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3f65eca435c05442542862079e26c6f7a84ffbf642d21d3e4f616c28e71cf6c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f1b1658f5dc89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   b3f65eca435c0       storage-provisioner                                    kube-system
	ae8d7c0b72df1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   6c51ceb2b1d32       dashboard-metrics-scraper-6ffb444bf9-wh8gl             kubernetes-dashboard
	d724e8e3fe1b9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   a9f2a66b06cb7       kubernetes-dashboard-855c9754f9-68hlv                  kubernetes-dashboard
	d48100d2588a2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   83262f33bbd07       busybox                                                default
	6f1603ab1d2f4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   d3f8ec177b096       coredns-66bc5c9577-8nz5c                               kube-system
	9089887de0862       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   b3f65eca435c0       storage-provisioner                                    kube-system
	ca90c0ad17baa       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           55 seconds ago      Running             kube-proxy                  0                   9eb8a9398b69f       kube-proxy-ss2p8                                       kube-system
	f557936eef47d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 0                   d86dd91a6633f       kindnet-lsrk2                                          kube-system
	8a177f28a91aa       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           58 seconds ago      Running             kube-controller-manager     0                   d1a99616c80c1       kube-controller-manager-default-k8s-diff-port-382022   kube-system
	b89ae3816c4a8       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           58 seconds ago      Running             kube-scheduler              0                   ef61531c43f92       kube-scheduler-default-k8s-diff-port-382022            kube-system
	7b920b07dddb5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           58 seconds ago      Running             kube-apiserver              0                   1d6a213753bf6       kube-apiserver-default-k8s-diff-port-382022            kube-system
	6133fb2263ed6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   95f6c7552b601       etcd-default-k8s-diff-port-382022                      kube-system
	
	
	==> coredns [6f1603ab1d2f4c5f3c89f50a948af12dfbdf8479947f920fb97b296d0b0332fb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55726 - 35627 "HINFO IN 3268233193462241071.8687381037654626663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045140202s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-382022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-382022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=default-k8s-diff-port-382022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-382022
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:56:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-382022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                1aeb2617-3121-4d2f-838a-f21c8acff3cb
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-8nz5c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-default-k8s-diff-port-382022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-lsrk2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-382022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-382022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-ss2p8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-382022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wh8gl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-68hlv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node default-k8s-diff-port-382022 event: Registered Node default-k8s-diff-port-382022 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-382022 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-382022 event: Registered Node default-k8s-diff-port-382022 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466] <==
	{"level":"warn","ts":"2025-12-17T11:55:15.677698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.689390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.708367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.718510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.728316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.738898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.752000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.768153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.778715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.790169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.800885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.812360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.838319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.852851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.863091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.875180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.884200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.905171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.914085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.923387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:16.007414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36920","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T11:56:05.675859Z","caller":"traceutil/trace.go:172","msg":"trace[262958520] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:680; }","duration":"125.975302ms","start":"2025-12-17T11:56:05.549855Z","end":"2025-12-17T11:56:05.675830Z","steps":["trace[262958520] 'read index received'  (duration: 125.965822ms)","trace[262958520] 'applied index is now lower than readState.Index'  (duration: 8.37µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:56:05.735513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.631523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2025-12-17T11:56:05.735656Z","caller":"traceutil/trace.go:172","msg":"trace[1207639401] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:640; }","duration":"185.784524ms","start":"2025-12-17T11:56:05.549850Z","end":"2025-12-17T11:56:05.735635Z","steps":["trace[1207639401] 'agreement among raft nodes before linearized reading'  (duration: 126.075712ms)","trace[1207639401] 'range keys from in-memory index tree'  (duration: 59.509842ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:56:05.735673Z","caller":"traceutil/trace.go:172","msg":"trace[462522711] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"188.26609ms","start":"2025-12-17T11:56:05.547389Z","end":"2025-12-17T11:56:05.735655Z","steps":["trace[462522711] 'process raft request'  (duration: 128.467272ms)","trace[462522711] 'compare'  (duration: 59.656655ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:56:13 up  5:38,  0 user,  load average: 13.36, 6.10, 3.28
	Linux default-k8s-diff-port-382022 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f557936eef47dbdcb08b9026a7f1a1443df08e5d35b1c935cf63462239b38e6e] <==
	I1217 11:55:17.514009       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:55:17.514270       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 11:55:17.514470       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:55:17.514500       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:55:17.514528       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:55:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:55:17.724419       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:55:17.724573       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:55:17.724602       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:55:18.011616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:55:18.311799       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:55:18.311838       1 metrics.go:72] Registering metrics
	I1217 11:55:18.311917       1 controller.go:711] "Syncing nftables rules"
	I1217 11:55:27.724937       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:27.724976       1 main.go:301] handling current node
	I1217 11:55:37.727385       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:37.727425       1 main.go:301] handling current node
	I1217 11:55:47.724326       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:47.724390       1 main.go:301] handling current node
	I1217 11:55:57.725627       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:57.725687       1 main.go:301] handling current node
	I1217 11:56:07.733647       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:56:07.733720       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0] <==
	I1217 11:55:16.699705       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:55:16.699733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:55:16.699776       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:55:16.693452       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 11:55:16.700632       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:55:16.702934       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 11:55:16.716619       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1217 11:55:16.716775       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 11:55:16.716795       1 policy_source.go:240] refreshing policies
	I1217 11:55:16.731622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:55:16.732280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 11:55:16.751932       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 11:55:16.752009       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:55:16.946190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:55:17.140680       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:55:17.179085       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:55:17.210967       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:55:17.233414       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:55:17.319338       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.184.161"}
	I1217 11:55:17.338313       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.59.229"}
	I1217 11:55:17.594284       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:55:20.380825       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:55:20.430250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:55:20.479993       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910] <==
	I1217 11:55:20.027038       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 11:55:20.027075       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:55:20.027094       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 11:55:20.027146       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 11:55:20.027231       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 11:55:20.027265       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 11:55:20.027354       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:55:20.027237       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 11:55:20.027769       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 11:55:20.028159       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 11:55:20.031061       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:55:20.032284       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 11:55:20.032352       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:20.035745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:55:20.035880       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:20.035955       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 11:55:20.036021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:55:20.036172       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:55:20.038522       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 11:55:20.043847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:55:20.045975       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 11:55:20.048262       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 11:55:20.050459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:55:20.052652       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 11:55:20.054154       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	
	
	==> kube-proxy [ca90c0ad17baa36018f17a949c22eb29104e7ffb781b57997d12ed329cb8c977] <==
	I1217 11:55:17.335985       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:55:17.407897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:55:17.508159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:55:17.508242       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 11:55:17.508358       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:55:17.527860       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:55:17.527909       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:55:17.533174       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:55:17.533557       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:55:17.533587       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:17.534779       1 config.go:200] "Starting service config controller"
	I1217 11:55:17.534805       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:55:17.534945       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:55:17.534971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:55:17.535036       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:55:17.535090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:55:17.535086       1 config.go:309] "Starting node config controller"
	I1217 11:55:17.535110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:55:17.535116       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:55:17.635444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:55:17.635524       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:55:17.635584       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7] <==
	I1217 11:55:15.721523       1 serving.go:386] Generated self-signed cert in-memory
	I1217 11:55:17.029148       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 11:55:17.029269       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:17.037430       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 11:55:17.037473       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 11:55:17.037659       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 11:55:17.037688       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:17.037719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:17.037702       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 11:55:17.038982       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 11:55:17.039420       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:55:17.138205       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 11:55:17.138228       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:17.138352       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:55:20 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:20.666639     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d8542ad9-52b2-4cb2-8212-fbf1b12a72a3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wh8gl\" (UID: \"d8542ad9-52b2-4cb2-8212-fbf1b12a72a3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl"
	Dec 17 11:55:20 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:20.666654     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvrs\" (UniqueName: \"kubernetes.io/projected/bb7da256-1b37-4ce4-9985-dd068a6f4b9f-kube-api-access-tlvrs\") pod \"kubernetes-dashboard-855c9754f9-68hlv\" (UID: \"bb7da256-1b37-4ce4-9985-dd068a6f4b9f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-68hlv"
	Dec 17 11:55:25 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:25.141138     767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 11:55:26 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:26.053005     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-68hlv" podStartSLOduration=1.9509136950000001 podStartE2EDuration="6.052978099s" podCreationTimestamp="2025-12-17 11:55:20 +0000 UTC" firstStartedPulling="2025-12-17 11:55:20.889055236 +0000 UTC m=+7.146582613" lastFinishedPulling="2025-12-17 11:55:24.991119636 +0000 UTC m=+11.248647017" observedRunningTime="2025-12-17 11:55:26.052528003 +0000 UTC m=+12.310055407" watchObservedRunningTime="2025-12-17 11:55:26.052978099 +0000 UTC m=+12.310505484"
	Dec 17 11:55:27 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:27.957737     767 scope.go:117] "RemoveContainer" containerID="4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d"
	Dec 17 11:55:28 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:28.961783     767 scope.go:117] "RemoveContainer" containerID="4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d"
	Dec 17 11:55:28 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:28.961971     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:28 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:28.962206     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:29 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:29.966856     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:29 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:29.967077     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:31 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:31.783377     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:31 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:31.783571     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:44 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:44.883357     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:45 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:45.010595     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:45 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:45.010800     767 scope.go:117] "RemoveContainer" containerID="ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	Dec 17 11:55:45 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:45.010976     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:48 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:48.022849     767 scope.go:117] "RemoveContainer" containerID="9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73"
	Dec 17 11:55:51 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:51.782906     767 scope.go:117] "RemoveContainer" containerID="ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	Dec 17 11:55:51 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:51.783123     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:56:03 default-k8s-diff-port-382022 kubelet[767]: I1217 11:56:03.883652     767 scope.go:117] "RemoveContainer" containerID="ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	Dec 17 11:56:03 default-k8s-diff-port-382022 kubelet[767]: E1217 11:56:03.883856     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: kubelet.service: Consumed 1.842s CPU time.
	
	
	==> kubernetes-dashboard [d724e8e3fe1b95f0c6f317a1032d94dbd2ab888e247b9ebaabf3b73d221d53a0] <==
	2025/12/17 11:55:25 Starting overwatch
	2025/12/17 11:55:25 Using namespace: kubernetes-dashboard
	2025/12/17 11:55:25 Using in-cluster config to connect to apiserver
	2025/12/17 11:55:25 Using secret token for csrf signing
	2025/12/17 11:55:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:55:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:55:25 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 11:55:25 Generating JWE encryption key
	2025/12/17 11:55:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:55:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:55:25 Initializing JWE encryption key from synchronized object
	2025/12/17 11:55:25 Creating in-cluster Sidecar client
	2025/12/17 11:55:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:25 Serving insecurely on HTTP port: 9090
	2025/12/17 11:55:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73] <==
	I1217 11:55:17.282601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:55:47.285489       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458] <==
	I1217 11:55:48.076896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 11:55:48.084790       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:55:48.084844       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:55:48.086971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:51.542476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:55.803618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:59.402341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:02.456408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:05.479441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:05.544225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:56:05.544357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:56:05.544452       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7879bd5-c601-4a2a-a916-1dac80f7bd21", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-382022_4ed51294-3b89-4f2e-9367-d593e9316d14 became leader
	I1217 11:56:05.544516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-382022_4ed51294-3b89-4f2e-9367-d593e9316d14!
	W1217 11:56:05.547681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:56:05.644902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-382022_4ed51294-3b89-4f2e-9367-d593e9316d14!
	W1217 11:56:05.736757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:07.741051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:07.745977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:09.750035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:09.756152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:11.759503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:11.776379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022: exit status 2 (472.79126ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-382022
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-382022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7",
	        "Created": "2025-12-17T11:54:00.607547087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1973171,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:55:06.227939626Z",
	            "FinishedAt": "2025-12-17T11:55:04.594955106Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/hosts",
	        "LogPath": "/var/lib/docker/containers/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7/4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7-json.log",
	        "Name": "/default-k8s-diff-port-382022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-382022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-382022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b7e99a28ab9def8568d86e206b90950571102afd87e43a5568829ab65599ad7",
	                "LowerDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a-init/diff:/var/lib/docker/overlay2/20f10f0dc63c2ca18b551dbb0ba292f977cd882d774dc00faae00f5f2a145008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8d6727b605b0ee3308bc913481b5a5e9a3ee0b4df5165123b9215f196fd2f2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-382022",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-382022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-382022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-382022",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-382022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "81119c253dbfcbf9dcbebd996fe93966a0aef1a7787ac99e11a7739c898e271e",
	            "SandboxKey": "/var/run/docker/netns/81119c253dbf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34642"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34645"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34643"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34644"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-382022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "009b4cca67d182f2097fba9336c46a1ff7237dab7ad046bb8a1746aae27ee661",
	                    "EndpointID": "8fb50baf9feb56e54ab2b731a7cd9dc097c73fe62d86f927b1bdd0a04f37c436",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "b2:e8:20:49:15:aa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-382022",
	                        "4b7e99a28ab9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022: exit status 2 (411.537076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382022 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-382022 logs -n 25: (1.408950439s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-542273                                                                                                                                              │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo systemctl cat docker --no-pager                                                                                                                │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo docker system info                                                                                                                             │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ ssh     │ -p auto-213935 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ ssh     │ -p auto-213935 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo cri-dockerd --version                                                                                                                          │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ delete  │ -p embed-certs-542273                                                                                                                                              │ embed-certs-542273           │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ start   │ -p calico-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-213935                │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ ssh     │ -p auto-213935 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo containerd config dump                                                                                                                         │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ ssh     │ -p auto-213935 sudo crio config                                                                                                                                    │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ delete  │ -p auto-213935                                                                                                                                                     │ auto-213935                  │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ start   │ -p custom-flannel-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-213935        │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	│ image   │ default-k8s-diff-port-382022 image list --format=json                                                                                                              │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │ 17 Dec 25 11:56 UTC │
	│ pause   │ -p default-k8s-diff-port-382022 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-382022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:56:08
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:56:08.344925 1988935 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:56:08.345075 1988935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:56:08.345089 1988935 out.go:374] Setting ErrFile to fd 2...
	I1217 11:56:08.345095 1988935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:56:08.345334 1988935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:56:08.346135 1988935 out.go:368] Setting JSON to false
	I1217 11:56:08.347903 1988935 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20313,"bootTime":1765952255,"procs":515,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:56:08.347968 1988935 start.go:143] virtualization: kvm guest
	I1217 11:56:08.351306 1988935 out.go:179] * [custom-flannel-213935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:56:08.352805 1988935 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:56:08.352853 1988935 notify.go:221] Checking for updates...
	I1217 11:56:08.356427 1988935 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:56:08.358945 1988935 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:56:08.360191 1988935 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:56:08.361749 1988935 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:56:08.364870 1988935 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:56:08.366784 1988935 config.go:182] Loaded profile config "calico-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.366954 1988935 config.go:182] Loaded profile config "default-k8s-diff-port-382022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.367081 1988935 config.go:182] Loaded profile config "kindnet-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.367210 1988935 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:56:08.399053 1988935 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:56:08.399203 1988935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:56:08.470586 1988935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:56:08.458571306 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:56:08.470711 1988935 docker.go:319] overlay module found
	I1217 11:56:08.476396 1988935 out.go:179] * Using the docker driver based on user configuration
	I1217 11:56:08.478158 1988935 start.go:309] selected driver: docker
	I1217 11:56:08.478183 1988935 start.go:927] validating driver "docker" against <nil>
	I1217 11:56:08.478197 1988935 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:56:08.478915 1988935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:56:08.568928 1988935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 11:56:08.554889898 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:56:08.569137 1988935 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:56:08.569452 1988935 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:56:08.573223 1988935 out.go:179] * Using Docker driver with root privileges
	I1217 11:56:08.576436 1988935 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1217 11:56:08.576473 1988935 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1217 11:56:08.576589 1988935 start.go:353] cluster config:
	{Name:custom-flannel-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:08.578253 1988935 out.go:179] * Starting "custom-flannel-213935" primary control-plane node in "custom-flannel-213935" cluster
	I1217 11:56:08.579626 1988935 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:56:08.581022 1988935 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:56:08.583121 1988935 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:56:08.583174 1988935 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:56:08.583190 1988935 cache.go:65] Caching tarball of preloaded images
	I1217 11:56:08.583244 1988935 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:56:08.583300 1988935 preload.go:238] Found /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:56:08.583312 1988935 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:56:08.583454 1988935 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/custom-flannel-213935/config.json ...
	I1217 11:56:08.583479 1988935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/custom-flannel-213935/config.json: {Name:mka72be7e9c041632dd273e52b418b695908ef27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.611327 1988935 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:56:08.611359 1988935 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:56:08.611380 1988935 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:56:08.611420 1988935 start.go:360] acquireMachinesLock for custom-flannel-213935: {Name:mk84d31db3628ebbaa1aa118c3375083ce996c9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:56:08.611578 1988935 start.go:364] duration metric: took 131.514µs to acquireMachinesLock for "custom-flannel-213935"
	I1217 11:56:08.611609 1988935 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:56:08.611749 1988935 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:56:07.425632 1981818 cli_runner.go:164] Run: docker network inspect kindnet-213935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:56:07.444550 1981818 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 11:56:07.449074 1981818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:07.459839 1981818 kubeadm.go:884] updating cluster {Name:kindnet-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:56:07.459950 1981818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:56:07.459993 1981818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:07.494271 1981818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:56:07.494297 1981818 crio.go:433] Images already preloaded, skipping extraction
	I1217 11:56:07.494359 1981818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:07.525323 1981818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:56:07.525357 1981818 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:56:07.525367 1981818 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.3 crio true true} ...
	I1217 11:56:07.525486 1981818 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-213935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1217 11:56:07.525620 1981818 ssh_runner.go:195] Run: crio config
	I1217 11:56:07.582036 1981818 cni.go:84] Creating CNI manager for "kindnet"
	I1217 11:56:07.582066 1981818 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:56:07.582088 1981818 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-213935 NodeName:kindnet-213935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:56:07.582254 1981818 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-213935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:56:07.582378 1981818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:56:07.595388 1981818 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:56:07.595468 1981818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:56:07.605813 1981818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1217 11:56:07.619994 1981818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:56:07.644245 1981818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1217 11:56:07.659595 1981818 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:56:07.664027 1981818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:07.676581 1981818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:07.779015 1981818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:07.801458 1981818 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935 for IP: 192.168.103.2
	I1217 11:56:07.801483 1981818 certs.go:195] generating shared ca certs ...
	I1217 11:56:07.801506 1981818 certs.go:227] acquiring lock for ca certs: {Name:mke6f8ead332a9a461d6e58c21494c63e9cda57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:07.801712 1981818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key
	I1217 11:56:07.801757 1981818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key
	I1217 11:56:07.801767 1981818 certs.go:257] generating profile certs ...
	I1217 11:56:07.801834 1981818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.key
	I1217 11:56:07.801859 1981818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.crt with IP's: []
	I1217 11:56:07.935179 1981818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.crt ...
	I1217 11:56:07.935211 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.crt: {Name:mkddca328a9dee388f7d0ad7d7368d6f017e075b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:07.935393 1981818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.key ...
	I1217 11:56:07.935409 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/client.key: {Name:mk5e548c655398bb77a17227b59bb035919e8cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:07.935520 1981818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10
	I1217 11:56:07.935558 1981818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 11:56:08.008227 1981818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10 ...
	I1217 11:56:08.008254 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10: {Name:mk0ef44f36c69551520f3343a7f6b87c746bdbd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.008431 1981818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10 ...
	I1217 11:56:08.008449 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10: {Name:mk5834436567a607e5721a8ff14466228ff67460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.008582 1981818 certs.go:382] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt.39338b10 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt
	I1217 11:56:08.008722 1981818 certs.go:386] copying /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key.39338b10 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key
	I1217 11:56:08.008806 1981818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key
	I1217 11:56:08.008825 1981818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt with IP's: []
	I1217 11:56:08.035302 1981818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt ...
	I1217 11:56:08.035341 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt: {Name:mk79220f228113a7cf44063d666b3e0c7bd65f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.035634 1981818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key ...
	I1217 11:56:08.035671 1981818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key: {Name:mkdada61fd2bb1ac3ea77f9bf632ba8a3672f55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:08.035988 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:56:08.036084 1981818 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:56:08.036112 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:56:08.036170 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:56:08.036219 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:56:08.036272 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:56:08.036349 1981818 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:56:08.037382 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:56:08.073421 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:56:08.100428 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:56:08.141629 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 11:56:08.164403 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 11:56:08.188306 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:56:08.208451 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:56:08.230221 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kindnet-213935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:56:08.251204 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:56:08.271253 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:56:08.291962 1981818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:56:08.312857 1981818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:56:08.327583 1981818 ssh_runner.go:195] Run: openssl version
	I1217 11:56:08.337057 1981818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.347349 1981818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:56:08.356625 1981818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.361092 1981818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.361150 1981818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:56:08.407263 1981818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:56:08.417377 1981818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.429990 1981818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:56:08.441809 1981818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.447702 1981818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.447768 1981818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:56:08.491430 1981818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:08.503757 1981818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.523126 1981818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:56:08.535081 1981818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.541519 1981818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.541646 1981818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:08.595217 1981818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
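The test/ln/hash sequence above is the standard OpenSSL hashed-directory layout: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs, and a second symlink named after the certificate's subject hash is what lets OpenSSL locate it. A short sketch for the minikube CA from this run (the b5213941 hash is the value this log checks for):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0        # hash-named link OpenSSL resolves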
	I1217 11:56:08.605744 1981818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:56:08.611428 1981818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:56:08.611488 1981818 kubeadm.go:401] StartCluster: {Name:kindnet-213935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-213935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:08.611601 1981818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:56:08.611656 1981818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:56:08.646309 1981818 cri.go:89] found id: ""
	I1217 11:56:08.646385 1981818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:56:08.656208 1981818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:56:08.664858 1981818 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:56:08.664929 1981818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:56:08.674072 1981818 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:56:08.674091 1981818 kubeadm.go:158] found existing configuration files:
	
	I1217 11:56:08.674143 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:56:08.683988 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:56:08.684044 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:56:08.693069 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:56:08.702777 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:56:08.702849 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:56:08.713654 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:56:08.723291 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:56:08.723355 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:56:08.732294 1981818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:56:08.742381 1981818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:56:08.742441 1981818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:56:08.751794 1981818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:56:08.797495 1981818 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:56:08.797596 1981818 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:56:08.823947 1981818 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:56:08.824056 1981818 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 11:56:08.824118 1981818 kubeadm.go:319] OS: Linux
	I1217 11:56:08.824185 1981818 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:56:08.824249 1981818 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:56:08.824335 1981818 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:56:08.824419 1981818 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:56:08.824501 1981818 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:56:08.824613 1981818 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:56:08.824696 1981818 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:56:08.824765 1981818 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 11:56:08.904923 1981818 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:56:08.905070 1981818 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:56:08.905196 1981818 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:56:08.914156 1981818 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:56:08.916610 1981818 out.go:252]   - Generating certificates and keys ...
	I1217 11:56:08.916747 1981818 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:56:08.916863 1981818 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:56:09.280063 1981818 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:56:09.502377 1981818 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:56:10.363874 1981818 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:56:10.785724 1981818 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:56:07.653299 1986025 cli_runner.go:164] Run: docker container inspect calico-213935 --format={{.State.Running}}
	I1217 11:56:07.675524 1986025 cli_runner.go:164] Run: docker container inspect calico-213935 --format={{.State.Status}}
	I1217 11:56:07.697109 1986025 cli_runner.go:164] Run: docker exec calico-213935 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:56:07.752565 1986025 oci.go:144] the created container "calico-213935" has a running status.
	I1217 11:56:07.752600 1986025 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/calico-213935/id_rsa...
	I1217 11:56:07.921720 1986025 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/calico-213935/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 11:56:08.052487 1986025 cli_runner.go:164] Run: docker container inspect calico-213935 --format={{.State.Status}}
	I1217 11:56:08.087029 1986025 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:56:08.087064 1986025 kic_runner.go:114] Args: [docker exec --privileged calico-213935 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:56:08.145074 1986025 cli_runner.go:164] Run: docker container inspect calico-213935 --format={{.State.Status}}
	I1217 11:56:08.167768 1986025 machine.go:94] provisionDockerMachine start ...
	I1217 11:56:08.167879 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:08.191582 1986025 main.go:143] libmachine: Using SSH client type: native
	I1217 11:56:08.191980 1986025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34651 <nil> <nil>}
	I1217 11:56:08.192014 1986025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:56:08.326933 1986025 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-213935
	
	I1217 11:56:08.326965 1986025 ubuntu.go:182] provisioning hostname "calico-213935"
	I1217 11:56:08.327047 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:08.350515 1986025 main.go:143] libmachine: Using SSH client type: native
	I1217 11:56:08.350865 1986025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34651 <nil> <nil>}
	I1217 11:56:08.350887 1986025 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-213935 && echo "calico-213935" | sudo tee /etc/hostname
	I1217 11:56:08.506898 1986025 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-213935
	
	I1217 11:56:08.508958 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:08.538289 1986025 main.go:143] libmachine: Using SSH client type: native
	I1217 11:56:08.538792 1986025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34651 <nil> <nil>}
	I1217 11:56:08.538844 1986025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-213935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-213935/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-213935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:56:08.682989 1986025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:56:08.683017 1986025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1669348/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1669348/.minikube}
	I1217 11:56:08.683061 1986025 ubuntu.go:190] setting up certificates
	I1217 11:56:08.683075 1986025 provision.go:84] configureAuth start
	I1217 11:56:08.683142 1986025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-213935
	I1217 11:56:08.706822 1986025 provision.go:143] copyHostCerts
	I1217 11:56:08.706889 1986025 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem, removing ...
	I1217 11:56:08.706900 1986025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem
	I1217 11:56:08.706974 1986025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/cert.pem (1123 bytes)
	I1217 11:56:08.707101 1986025 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem, removing ...
	I1217 11:56:08.707116 1986025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem
	I1217 11:56:08.707164 1986025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/key.pem (1679 bytes)
	I1217 11:56:08.707267 1986025 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem, removing ...
	I1217 11:56:08.707275 1986025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem
	I1217 11:56:08.707313 1986025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.pem (1078 bytes)
	I1217 11:56:08.707413 1986025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem org=jenkins.calico-213935 san=[127.0.0.1 192.168.94.2 calico-213935 localhost minikube]
	I1217 11:56:08.757336 1986025 provision.go:177] copyRemoteCerts
	I1217 11:56:08.757402 1986025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:56:08.757452 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:08.780387 1986025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/calico-213935/id_rsa Username:docker}
	I1217 11:56:08.885310 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 11:56:08.914961 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 11:56:08.938811 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:56:08.963747 1986025 provision.go:87] duration metric: took 280.651738ms to configureAuth
	I1217 11:56:08.963790 1986025 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:56:08.963989 1986025 config.go:182] Loaded profile config "calico-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:56:08.964098 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:08.986930 1986025 main.go:143] libmachine: Using SSH client type: native
	I1217 11:56:08.987236 1986025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 34651 <nil> <nil>}
	I1217 11:56:08.987268 1986025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:56:09.301775 1986025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:56:09.301809 1986025 machine.go:97] duration metric: took 1.134009549s to provisionDockerMachine
	I1217 11:56:09.301822 1986025 client.go:176] duration metric: took 6.455343414s to LocalClient.Create
	I1217 11:56:09.301843 1986025 start.go:167] duration metric: took 6.455423353s to libmachine.API.Create "calico-213935"
	I1217 11:56:09.301853 1986025 start.go:293] postStartSetup for "calico-213935" (driver="docker")
	I1217 11:56:09.301866 1986025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:56:09.301930 1986025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:56:09.301978 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:09.328175 1986025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/calico-213935/id_rsa Username:docker}
	I1217 11:56:09.431590 1986025 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:56:09.436266 1986025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:56:09.436297 1986025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:56:09.436310 1986025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/addons for local assets ...
	I1217 11:56:09.436401 1986025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1669348/.minikube/files for local assets ...
	I1217 11:56:09.436491 1986025 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem -> 16729412.pem in /etc/ssl/certs
	I1217 11:56:09.436639 1986025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:56:09.446632 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:56:09.469889 1986025 start.go:296] duration metric: took 168.01675ms for postStartSetup
	I1217 11:56:09.470332 1986025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-213935
	I1217 11:56:09.492895 1986025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/calico-213935/config.json ...
	I1217 11:56:09.493316 1986025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:56:09.493376 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:09.516661 1986025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/calico-213935/id_rsa Username:docker}
	I1217 11:56:09.609577 1986025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:56:09.614622 1986025 start.go:128] duration metric: took 6.770513378s to createHost
	I1217 11:56:09.614653 1986025 start.go:83] releasing machines lock for "calico-213935", held for 6.770668155s
	I1217 11:56:09.614770 1986025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-213935
	I1217 11:56:09.634972 1986025 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem (1338 bytes)
	W1217 11:56:09.635025 1986025 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941_empty.pem, impossibly tiny 0 bytes
	I1217 11:56:09.635035 1986025 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:56:09.635057 1986025 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem (1078 bytes)
	I1217 11:56:09.635081 1986025 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:56:09.635104 1986025 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/key.pem (1679 bytes)
	I1217 11:56:09.635151 1986025 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem (1708 bytes)
	I1217 11:56:09.635230 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/1672941.pem --> /usr/share/ca-certificates/1672941.pem (1338 bytes)
	I1217 11:56:09.635281 1986025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-213935
	I1217 11:56:09.653295 1986025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34651 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/calico-213935/id_rsa Username:docker}
	I1217 11:56:09.763359 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/ssl/certs/16729412.pem --> /usr/share/ca-certificates/16729412.pem (1708 bytes)
	I1217 11:56:09.782654 1986025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:56:09.801146 1986025 ssh_runner.go:195] Run: openssl version
	I1217 11:56:09.807822 1986025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16729412.pem
	I1217 11:56:09.816670 1986025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16729412.pem /etc/ssl/certs/16729412.pem
	I1217 11:56:09.825352 1986025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16729412.pem
	I1217 11:56:09.830134 1986025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:23 /usr/share/ca-certificates/16729412.pem
	I1217 11:56:09.830211 1986025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16729412.pem
	I1217 11:56:09.872725 1986025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:09.883706 1986025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16729412.pem /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:09.894560 1986025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:09.903643 1986025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:56:09.912776 1986025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:09.917284 1986025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:09.917354 1986025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:09.957164 1986025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:56:09.965646 1986025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:56:09.973793 1986025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1672941.pem
	I1217 11:56:09.981958 1986025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1672941.pem /etc/ssl/certs/1672941.pem
	I1217 11:56:09.990218 1986025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672941.pem
	I1217 11:56:09.994392 1986025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:23 /usr/share/ca-certificates/1672941.pem
	I1217 11:56:09.994454 1986025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672941.pem
	I1217 11:56:10.030360 1986025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:56:10.039174 1986025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1672941.pem /etc/ssl/certs/51391683.0
	I1217 11:56:10.048303 1986025 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 11:56:10.053041 1986025 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 11:56:10.057309 1986025 ssh_runner.go:195] Run: cat /version.json
	I1217 11:56:10.057353 1986025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:56:10.117569 1986025 ssh_runner.go:195] Run: systemctl --version
	I1217 11:56:10.124517 1986025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:56:10.166184 1986025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:56:10.171224 1986025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:56:10.171330 1986025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:56:10.203823 1986025 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:56:10.203855 1986025 start.go:496] detecting cgroup driver to use...
	I1217 11:56:10.203895 1986025 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 11:56:10.203948 1986025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:56:10.222351 1986025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:56:10.236859 1986025 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:56:10.236932 1986025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:56:10.255032 1986025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:56:10.275821 1986025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:56:10.367140 1986025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:56:10.482129 1986025 docker.go:234] disabling docker service ...
	I1217 11:56:10.482196 1986025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:56:10.505504 1986025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:56:10.525157 1986025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:56:10.624042 1986025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:56:10.724014 1986025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:56:10.741663 1986025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:56:10.760996 1986025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:56:10.761071 1986025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.773411 1986025 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 11:56:10.773482 1986025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.784382 1986025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.795685 1986025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.810685 1986025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:56:10.822821 1986025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.835491 1986025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.851150 1986025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:56:10.861816 1986025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:56:10.870879 1986025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:56:10.880066 1986025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:10.991167 1986025 ssh_runner.go:195] Run: sudo systemctl restart crio
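The sed commands above pin the pause image, switch CRI-O to the systemd cgroup driver, put conmon in the pod cgroup, and allow unprivileged low ports before crio is restarted. After the restart the keys touched in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (only the values written above are shown; surrounding sections and other settings stay as shipped in the kicbase image):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]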
	I1217 11:56:11.020085 1981818 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:56:11.020395 1981818 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-213935 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 11:56:11.149444 1981818 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:56:11.149704 1981818 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-213935 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 11:56:11.213436 1981818 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:56:11.252355 1981818 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:56:11.660452 1981818 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:56:11.660563 1981818 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:56:12.113071 1981818 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:56:12.196302 1981818 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:56:12.323508 1981818 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:56:12.504709 1981818 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:56:12.637128 1981818 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:56:12.637705 1981818 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:56:12.699840 1981818 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:56:13.085545 1986025 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.09432986s)
	I1217 11:56:13.085583 1986025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:56:13.085636 1986025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:56:13.090637 1986025 start.go:564] Will wait 60s for crictl version
	I1217 11:56:13.090735 1986025 ssh_runner.go:195] Run: which crictl
	I1217 11:56:13.095000 1986025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:56:13.127892 1986025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 11:56:13.128026 1986025 ssh_runner.go:195] Run: crio --version
	I1217 11:56:13.165609 1986025 ssh_runner.go:195] Run: crio --version
	I1217 11:56:13.204008 1986025 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 11:56:08.613921 1988935 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:56:08.614199 1988935 start.go:159] libmachine.API.Create for "custom-flannel-213935" (driver="docker")
	I1217 11:56:08.614241 1988935 client.go:173] LocalClient.Create starting
	I1217 11:56:08.614373 1988935 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/ca.pem
	I1217 11:56:08.614420 1988935 main.go:143] libmachine: Decoding PEM data...
	I1217 11:56:08.614451 1988935 main.go:143] libmachine: Parsing certificate...
	I1217 11:56:08.614551 1988935 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1669348/.minikube/certs/cert.pem
	I1217 11:56:08.614584 1988935 main.go:143] libmachine: Decoding PEM data...
	I1217 11:56:08.614603 1988935 main.go:143] libmachine: Parsing certificate...
	I1217 11:56:08.615041 1988935 cli_runner.go:164] Run: docker network inspect custom-flannel-213935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:56:08.638126 1988935 cli_runner.go:211] docker network inspect custom-flannel-213935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:56:08.638220 1988935 network_create.go:284] running [docker network inspect custom-flannel-213935] to gather additional debugging logs...
	I1217 11:56:08.638247 1988935 cli_runner.go:164] Run: docker network inspect custom-flannel-213935
	W1217 11:56:08.658756 1988935 cli_runner.go:211] docker network inspect custom-flannel-213935 returned with exit code 1
	I1217 11:56:08.658785 1988935 network_create.go:287] error running [docker network inspect custom-flannel-213935]: docker network inspect custom-flannel-213935: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-213935 not found
	I1217 11:56:08.658802 1988935 network_create.go:289] output of [docker network inspect custom-flannel-213935]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-213935 not found
	
	** /stderr **
	I1217 11:56:08.659089 1988935 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:56:08.680025 1988935 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3d92c06bf7e1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:dc:f5:1a:95:c6} reservation:<nil>}
	I1217 11:56:08.680975 1988935 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e34a3db6b97 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:b3:69:9a:9a:9f} reservation:<nil>}
	I1217 11:56:08.681768 1988935 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d8460370d724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:bb:68:9a:9d:ac} reservation:<nil>}
	I1217 11:56:08.682409 1988935 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-009b4cca67d1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:77:e4:db:4d:bd} reservation:<nil>}
	I1217 11:56:08.683647 1988935 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020edae0}
	I1217 11:56:08.683682 1988935 network_create.go:124] attempt to create docker network custom-flannel-213935 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 11:56:08.683741 1988935 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-213935 custom-flannel-213935
	I1217 11:56:08.741099 1988935 network_create.go:108] docker network custom-flannel-213935 192.168.85.0/24 created
	I1217 11:56:08.741142 1988935 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-213935" container
	I1217 11:56:08.741197 1988935 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:56:08.763355 1988935 cli_runner.go:164] Run: docker volume create custom-flannel-213935 --label name.minikube.sigs.k8s.io=custom-flannel-213935 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:56:08.786351 1988935 oci.go:103] Successfully created a docker volume custom-flannel-213935
	I1217 11:56:08.786437 1988935 cli_runner.go:164] Run: docker run --rm --name custom-flannel-213935-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-213935 --entrypoint /usr/bin/test -v custom-flannel-213935:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:56:09.247423 1988935 oci.go:107] Successfully prepared a docker volume custom-flannel-213935
	I1217 11:56:09.247509 1988935 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:56:09.247525 1988935 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:56:09.247609 1988935 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-213935:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:56:13.080799 1988935 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-213935:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.833119231s)
	I1217 11:56:13.080837 1988935 kic.go:203] duration metric: took 3.833306511s to extract preloaded images to volume ...
	W1217 11:56:13.080920 1988935 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 11:56:13.081073 1988935 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 11:56:13.081118 1988935 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:56:13.149831 1988935 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-213935 --name custom-flannel-213935 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-213935 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-213935 --network custom-flannel-213935 --ip 192.168.85.2 --volume custom-flannel-213935:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
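The container above is attached to the dedicated custom-flannel-213935 bridge network with a pinned address (--ip 192.168.85.2) inside the 192.168.85.0/24 subnet created a few lines earlier. Reusing the same Go templates this log already uses for inspection, the assignment can be verified with:

	docker network inspect custom-flannel-213935 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	docker container inspect custom-flannel-213935 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'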
	
	
	==> CRI-O <==
	Dec 17 11:55:28 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:28.025401187Z" level=info msg="Started container" PID=1806 containerID=f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper id=b9c6a73b-0b7c-4be2-a85d-2f73d9744aad name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c51ceb2b1d32338f753599e86faa696c5640ab62ee64fff356d5cb58e3926cb
	Dec 17 11:55:28 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:28.9636027Z" level=info msg="Removing container: 4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d" id=a3f697c2-4e65-4659-8394-2c55838e736a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:28 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:28.974298735Z" level=info msg="Removed container 4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=a3f697c2-4e65-4659-8394-2c55838e736a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.883978429Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7cb77da8-fa3d-417f-ba01-cdff6a9796f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.885038861Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28af450d-1a1d-4854-ad5e-1f5066048947 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.886250164Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=6448aa0d-fbd0-413f-9942-03968c39791d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.886482603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.892334969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.89302041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.927152222Z" level=info msg="Created container ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=6448aa0d-fbd0-413f-9942-03968c39791d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.927819595Z" level=info msg="Starting container: ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72" id=d7755200-e3e1-4c38-8e97-1e3310fd0324 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:44 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:44.92952515Z" level=info msg="Started container" PID=1817 containerID=ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper id=d7755200-e3e1-4c38-8e97-1e3310fd0324 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c51ceb2b1d32338f753599e86faa696c5640ab62ee64fff356d5cb58e3926cb
	Dec 17 11:55:45 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:45.012000192Z" level=info msg="Removing container: f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb" id=eb03c949-7110-4eca-afa8-45f5c6c37f20 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:45 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:45.021879324Z" level=info msg="Removed container f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl/dashboard-metrics-scraper" id=eb03c949-7110-4eca-afa8-45f5c6c37f20 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.023246378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=129c6fca-719c-4ff8-bb7c-772fffb5b960 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.024377123Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3fa48443-ee7c-46aa-bbc0-ffb484d7de8d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.02549083Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e662c21b-e144-4be5-bb6a-8431bc71290c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.02568819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030383234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030595711Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8296ce8ef1741954265e7371e6e1973ebcb1458ea4d0692182a2d62f03e62b90/merged/etc/passwd: no such file or directory"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030640024Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8296ce8ef1741954265e7371e6e1973ebcb1458ea4d0692182a2d62f03e62b90/merged/etc/group: no such file or directory"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.030921701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.061201361Z" level=info msg="Created container f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458: kube-system/storage-provisioner/storage-provisioner" id=e662c21b-e144-4be5-bb6a-8431bc71290c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.061911767Z" level=info msg="Starting container: f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458" id=854ee3f5-ed55-468f-b792-b27c582bf759 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 11:55:48 default-k8s-diff-port-382022 crio[606]: time="2025-12-17T11:55:48.063885754Z" level=info msg="Started container" PID=1831 containerID=f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458 description=kube-system/storage-provisioner/storage-provisioner id=854ee3f5-ed55-468f-b792-b27c582bf759 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3f65eca435c05442542862079e26c6f7a84ffbf642d21d3e4f616c28e71cf6c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f1b1658f5dc89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   b3f65eca435c0       storage-provisioner                                    kube-system
	ae8d7c0b72df1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   6c51ceb2b1d32       dashboard-metrics-scraper-6ffb444bf9-wh8gl             kubernetes-dashboard
	d724e8e3fe1b9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   a9f2a66b06cb7       kubernetes-dashboard-855c9754f9-68hlv                  kubernetes-dashboard
	d48100d2588a2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   83262f33bbd07       busybox                                                default
	6f1603ab1d2f4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   d3f8ec177b096       coredns-66bc5c9577-8nz5c                               kube-system
	9089887de0862       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   b3f65eca435c0       storage-provisioner                                    kube-system
	ca90c0ad17baa       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           58 seconds ago       Running             kube-proxy                  0                   9eb8a9398b69f       kube-proxy-ss2p8                                       kube-system
	f557936eef47d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           58 seconds ago       Running             kindnet-cni                 0                   d86dd91a6633f       kindnet-lsrk2                                          kube-system
	8a177f28a91aa       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           About a minute ago   Running             kube-controller-manager     0                   d1a99616c80c1       kube-controller-manager-default-k8s-diff-port-382022   kube-system
	b89ae3816c4a8       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           About a minute ago   Running             kube-scheduler              0                   ef61531c43f92       kube-scheduler-default-k8s-diff-port-382022            kube-system
	7b920b07dddb5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           About a minute ago   Running             kube-apiserver              0                   1d6a213753bf6       kube-apiserver-default-k8s-diff-port-382022            kube-system
	6133fb2263ed6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   95f6c7552b601       etcd-default-k8s-diff-port-382022                      kube-system
	
	
	==> coredns [6f1603ab1d2f4c5f3c89f50a948af12dfbdf8479947f920fb97b296d0b0332fb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55726 - 35627 "HINFO IN 3268233193462241071.8687381037654626663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045140202s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-382022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-382022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=default-k8s-diff-port-382022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_54_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:54:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-382022
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:56:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:55:47 +0000   Wed, 17 Dec 2025 11:54:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-382022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                1aeb2617-3121-4d2f-838a-f21c8acff3cb
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-8nz5c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-default-k8s-diff-port-382022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-lsrk2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-default-k8s-diff-port-382022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-382022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-ss2p8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-default-k8s-diff-port-382022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wh8gl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-68hlv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node default-k8s-diff-port-382022 event: Registered Node default-k8s-diff-port-382022 in Controller
	  Normal  NodeReady                101s               kubelet          Node default-k8s-diff-port-382022 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-382022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-382022 event: Registered Node default-k8s-diff-port-382022 in Controller
	
	
	==> dmesg <==
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 6a 9b 8a 10 9d b0 08 06
	[  +0.000354] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 5c d5 97 aa 82 08 06
	[Dec17 11:17] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.027018] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023877] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023972] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023891] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +1.023907] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +2.047850] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +4.031718] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[  +8.191427] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[ +16.382789] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	[Dec17 11:18] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 95 e1 fc 04 bd e2 f4 76 50 a0 0a 08 00
	
	
	==> etcd [6133fb2263ed69eedfc718e57501b70033d65802ca78d796131ff5830a512466] <==
	{"level":"warn","ts":"2025-12-17T11:55:15.677698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.689390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.708367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.718510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.728316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.738898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.752000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.768153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.778715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.790169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.800885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.812360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.838319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.852851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.863091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.875180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.884200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.905171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.914085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:15.923387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T11:55:16.007414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36920","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T11:56:05.675859Z","caller":"traceutil/trace.go:172","msg":"trace[262958520] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:680; }","duration":"125.975302ms","start":"2025-12-17T11:56:05.549855Z","end":"2025-12-17T11:56:05.675830Z","steps":["trace[262958520] 'read index received'  (duration: 125.965822ms)","trace[262958520] 'applied index is now lower than readState.Index'  (duration: 8.37µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:56:05.735513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.631523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2025-12-17T11:56:05.735656Z","caller":"traceutil/trace.go:172","msg":"trace[1207639401] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:640; }","duration":"185.784524ms","start":"2025-12-17T11:56:05.549850Z","end":"2025-12-17T11:56:05.735635Z","steps":["trace[1207639401] 'agreement among raft nodes before linearized reading'  (duration: 126.075712ms)","trace[1207639401] 'range keys from in-memory index tree'  (duration: 59.509842ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:56:05.735673Z","caller":"traceutil/trace.go:172","msg":"trace[462522711] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"188.26609ms","start":"2025-12-17T11:56:05.547389Z","end":"2025-12-17T11:56:05.735655Z","steps":["trace[462522711] 'process raft request'  (duration: 128.467272ms)","trace[462522711] 'compare'  (duration: 59.656655ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:56:15 up  5:38,  0 user,  load average: 12.61, 6.06, 3.28
	Linux default-k8s-diff-port-382022 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f557936eef47dbdcb08b9026a7f1a1443df08e5d35b1c935cf63462239b38e6e] <==
	I1217 11:55:17.514009       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 11:55:17.514270       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 11:55:17.514470       1 main.go:148] setting mtu 1500 for CNI 
	I1217 11:55:17.514500       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 11:55:17.514528       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T11:55:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 11:55:17.724419       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 11:55:17.724573       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 11:55:17.724602       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 11:55:18.011616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 11:55:18.311799       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 11:55:18.311838       1 metrics.go:72] Registering metrics
	I1217 11:55:18.311917       1 controller.go:711] "Syncing nftables rules"
	I1217 11:55:27.724937       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:27.724976       1 main.go:301] handling current node
	I1217 11:55:37.727385       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:37.727425       1 main.go:301] handling current node
	I1217 11:55:47.724326       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:47.724390       1 main.go:301] handling current node
	I1217 11:55:57.725627       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:55:57.725687       1 main.go:301] handling current node
	I1217 11:56:07.733647       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 11:56:07.733720       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7b920b07dddb55c17343ecbdc9f777396c3b3e9c983a17164746d7f9865e23b0] <==
	I1217 11:55:16.699705       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 11:55:16.699733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 11:55:16.699776       1 cache.go:39] Caches are synced for autoregister controller
	I1217 11:55:16.693452       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 11:55:16.700632       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 11:55:16.702934       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 11:55:16.716619       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1217 11:55:16.716775       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 11:55:16.716795       1 policy_source.go:240] refreshing policies
	I1217 11:55:16.731622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 11:55:16.732280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 11:55:16.751932       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 11:55:16.752009       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 11:55:16.946190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 11:55:17.140680       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 11:55:17.179085       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 11:55:17.210967       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 11:55:17.233414       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 11:55:17.319338       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.184.161"}
	I1217 11:55:17.338313       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.59.229"}
	I1217 11:55:17.594284       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 11:55:20.380825       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 11:55:20.430250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 11:55:20.479993       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 11:55:20.479993       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8a177f28a91aaa2beb33f612bda7e08cb55f517dc85cb28db4600fd97f28c910] <==
	I1217 11:55:20.027038       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 11:55:20.027075       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 11:55:20.027094       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 11:55:20.027146       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 11:55:20.027231       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 11:55:20.027265       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 11:55:20.027354       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 11:55:20.027237       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 11:55:20.027769       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 11:55:20.028159       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 11:55:20.031061       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 11:55:20.032284       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 11:55:20.032352       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:20.035745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 11:55:20.035880       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:55:20.035955       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 11:55:20.036021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 11:55:20.036172       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 11:55:20.038522       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 11:55:20.043847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 11:55:20.045975       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 11:55:20.048262       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 11:55:20.050459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 11:55:20.052652       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 11:55:20.054154       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	
	
	==> kube-proxy [ca90c0ad17baa36018f17a949c22eb29104e7ffb781b57997d12ed329cb8c977] <==
	I1217 11:55:17.335985       1 server_linux.go:53] "Using iptables proxy"
	I1217 11:55:17.407897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:55:17.508159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:55:17.508242       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 11:55:17.508358       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:55:17.527860       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 11:55:17.527909       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:55:17.533174       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:55:17.533557       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:55:17.533587       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:17.534779       1 config.go:200] "Starting service config controller"
	I1217 11:55:17.534805       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:55:17.534945       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:55:17.534971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:55:17.535036       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:55:17.535090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:55:17.535086       1 config.go:309] "Starting node config controller"
	I1217 11:55:17.535110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:55:17.535116       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:55:17.635444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 11:55:17.635524       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:55:17.635584       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b89ae3816c4a84a75d80384f2ac0ba58aaba5961009d2b0e4689a33fd8bee8c7] <==
	I1217 11:55:15.721523       1 serving.go:386] Generated self-signed cert in-memory
	I1217 11:55:17.029148       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 11:55:17.029269       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:55:17.037430       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 11:55:17.037473       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 11:55:17.037659       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 11:55:17.037688       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:17.037719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:17.037702       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 11:55:17.038982       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 11:55:17.039420       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 11:55:17.138205       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 11:55:17.138228       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 11:55:17.138352       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:55:20 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:20.666639     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d8542ad9-52b2-4cb2-8212-fbf1b12a72a3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wh8gl\" (UID: \"d8542ad9-52b2-4cb2-8212-fbf1b12a72a3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl"
	Dec 17 11:55:20 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:20.666654     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvrs\" (UniqueName: \"kubernetes.io/projected/bb7da256-1b37-4ce4-9985-dd068a6f4b9f-kube-api-access-tlvrs\") pod \"kubernetes-dashboard-855c9754f9-68hlv\" (UID: \"bb7da256-1b37-4ce4-9985-dd068a6f4b9f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-68hlv"
	Dec 17 11:55:25 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:25.141138     767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 11:55:26 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:26.053005     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-68hlv" podStartSLOduration=1.9509136950000001 podStartE2EDuration="6.052978099s" podCreationTimestamp="2025-12-17 11:55:20 +0000 UTC" firstStartedPulling="2025-12-17 11:55:20.889055236 +0000 UTC m=+7.146582613" lastFinishedPulling="2025-12-17 11:55:24.991119636 +0000 UTC m=+11.248647017" observedRunningTime="2025-12-17 11:55:26.052528003 +0000 UTC m=+12.310055407" watchObservedRunningTime="2025-12-17 11:55:26.052978099 +0000 UTC m=+12.310505484"
	Dec 17 11:55:27 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:27.957737     767 scope.go:117] "RemoveContainer" containerID="4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d"
	Dec 17 11:55:28 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:28.961783     767 scope.go:117] "RemoveContainer" containerID="4b75b93cd21131ff616eba2640311ae055f51347d8a1e1ffc92215e40c9b541d"
	Dec 17 11:55:28 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:28.961971     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:28 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:28.962206     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:29 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:29.966856     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:29 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:29.967077     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:31 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:31.783377     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:31 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:31.783571     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:44 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:44.883357     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:45 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:45.010595     767 scope.go:117] "RemoveContainer" containerID="f321cd9fcc9f4e1ac3e551d2eb50a9b51554adea2f56837aee55cd69b70adcdb"
	Dec 17 11:55:45 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:45.010800     767 scope.go:117] "RemoveContainer" containerID="ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	Dec 17 11:55:45 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:45.010976     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:55:48 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:48.022849     767 scope.go:117] "RemoveContainer" containerID="9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73"
	Dec 17 11:55:51 default-k8s-diff-port-382022 kubelet[767]: I1217 11:55:51.782906     767 scope.go:117] "RemoveContainer" containerID="ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	Dec 17 11:55:51 default-k8s-diff-port-382022 kubelet[767]: E1217 11:55:51.783123     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:56:03 default-k8s-diff-port-382022 kubelet[767]: I1217 11:56:03.883652     767 scope.go:117] "RemoveContainer" containerID="ae8d7c0b72df157c02570a4d6d79ba210d1b36bdbb2a7b4c9feaaff25b206f72"
	Dec 17 11:56:03 default-k8s-diff-port-382022 kubelet[767]: E1217 11:56:03.883856     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wh8gl_kubernetes-dashboard(d8542ad9-52b2-4cb2-8212-fbf1b12a72a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wh8gl" podUID="d8542ad9-52b2-4cb2-8212-fbf1b12a72a3"
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:56:09 default-k8s-diff-port-382022 systemd[1]: kubelet.service: Consumed 1.842s CPU time.
	
	
	==> kubernetes-dashboard [d724e8e3fe1b95f0c6f317a1032d94dbd2ab888e247b9ebaabf3b73d221d53a0] <==
	2025/12/17 11:55:25 Starting overwatch
	2025/12/17 11:55:25 Using namespace: kubernetes-dashboard
	2025/12/17 11:55:25 Using in-cluster config to connect to apiserver
	2025/12/17 11:55:25 Using secret token for csrf signing
	2025/12/17 11:55:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 11:55:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 11:55:25 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 11:55:25 Generating JWE encryption key
	2025/12/17 11:55:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 11:55:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 11:55:25 Initializing JWE encryption key from synchronized object
	2025/12/17 11:55:25 Creating in-cluster Sidecar client
	2025/12/17 11:55:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 11:55:25 Serving insecurely on HTTP port: 9090
	2025/12/17 11:55:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9089887de0862d0ff0a1ff8947345dc29948ae2624d480753c53732800ea3d73] <==
	I1217 11:55:17.282601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 11:55:47.285489       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f1b1658f5dc89bd1e4d2fff38f6236eae96a88145d725189478a6dc19dfe2458] <==
	I1217 11:55:48.084790       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 11:55:48.084844       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 11:55:48.086971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:51.542476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:55.803618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:55:59.402341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:02.456408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:05.479441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:05.544225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:56:05.544357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 11:56:05.544452       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7879bd5-c601-4a2a-a916-1dac80f7bd21", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-382022_4ed51294-3b89-4f2e-9367-d593e9316d14 became leader
	I1217 11:56:05.544516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-382022_4ed51294-3b89-4f2e-9367-d593e9316d14!
	W1217 11:56:05.547681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 11:56:05.644902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-382022_4ed51294-3b89-4f2e-9367-d593e9316d14!
	W1217 11:56:05.736757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:07.741051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:07.745977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:09.750035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:09.756152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:11.759503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:11.776379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:13.780436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:13.787952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:15.791160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:56:15.795791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022: exit status 2 (332.859469ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.59s)

                                                
                                    

Test pass (354/415)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.26
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.3/json-events 10.02
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.09
18 TestDownloadOnly/v1.34.3/DeleteAll 0.27
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-rc.1/json-events 10.93
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.13
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.25
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.16
29 TestDownloadOnlyKic 0.43
30 TestBinaryMirror 0.9
31 TestOffline 52.98
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 106.61
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.46
57 TestAddons/StoppedEnableDisable 16.83
58 TestCertOptions 27.78
59 TestCertExpiration 213.08
61 TestForceSystemdFlag 26.64
62 TestForceSystemdEnv 36.26
67 TestErrorSpam/setup 20.3
68 TestErrorSpam/start 0.71
69 TestErrorSpam/status 0.99
70 TestErrorSpam/pause 7.06
71 TestErrorSpam/unpause 6
72 TestErrorSpam/stop 2.61
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 41.34
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.63
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.76
84 TestFunctional/serial/CacheCmd/cache/add_local 2.08
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 77.31
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.35
95 TestFunctional/serial/LogsFileCmd 1.33
96 TestFunctional/serial/InvalidService 5.66
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 9.44
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 1.14
106 TestFunctional/parallel/ServiceCmdConnect 9.71
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 23.97
110 TestFunctional/parallel/SSHCmd 0.64
111 TestFunctional/parallel/CpCmd 1.87
112 TestFunctional/parallel/MySQL 23.61
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 1.86
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
122 TestFunctional/parallel/License 0.46
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
125 TestFunctional/parallel/Version/components 0.5
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
129 TestFunctional/parallel/ImageCommands/ImageBuild 5.03
130 TestFunctional/parallel/ImageCommands/Setup 1.89
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.6
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.32
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.91
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.42
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/ServiceCmd/DeployApp 8.16
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
154 TestFunctional/parallel/ProfileCmd/profile_list 0.42
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
156 TestFunctional/parallel/MountCmd/any-port 11.68
157 TestFunctional/parallel/ServiceCmd/List 1.75
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.79
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
160 TestFunctional/parallel/ServiceCmd/Format 0.55
161 TestFunctional/parallel/ServiceCmd/URL 0.58
162 TestFunctional/parallel/MountCmd/specific-port 2.01
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 37.99
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.57
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.61
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.03
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.62
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.14
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 63.13
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.33
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.35
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.99
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.52
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 7.97
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.39
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.13
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 9.74
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.2
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 23.95
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.6
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.93
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 25.15
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.95
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.61
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.43
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 8.2
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.09
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.62
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 1.15
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.23
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.23
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.27
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 9.4
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.88
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.31
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.22
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.2
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.21
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.98
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.51
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 8.02
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.45
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.43
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.36
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.53
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.64
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.42
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.37
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 9.24
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.38
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.4
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.39
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.41
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.72
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 2.02
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 112.24
266 TestMultiControlPlane/serial/DeployApp 6.48
267 TestMultiControlPlane/serial/PingHostFromPods 1.16
268 TestMultiControlPlane/serial/AddWorkerNode 29.19
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
271 TestMultiControlPlane/serial/CopyFile 18.35
272 TestMultiControlPlane/serial/StopSecondaryNode 18.8
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
274 TestMultiControlPlane/serial/RestartSecondaryNode 9.17
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 109.95
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.69
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
279 TestMultiControlPlane/serial/StopCluster 31.57
280 TestMultiControlPlane/serial/RestartCluster 54.84
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
282 TestMultiControlPlane/serial/AddSecondaryNode 46.59
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
288 TestJSONOutput/start/Command 40.32
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.08
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.25
313 TestKicCustomNetwork/create_custom_network 33.1
314 TestKicCustomNetwork/use_default_bridge_network 24.08
315 TestKicExistingNetwork 24.43
316 TestKicCustomSubnet 23.34
317 TestKicStaticIP 23.99
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 52.71
322 TestMountStart/serial/StartWithMountFirst 8.15
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 5.05
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.7
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 8.31
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 71.43
334 TestMultiNode/serial/DeployApp2Nodes 4.45
335 TestMultiNode/serial/PingHostFrom2Pods 0.75
336 TestMultiNode/serial/AddNode 25.95
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.65
339 TestMultiNode/serial/CopyFile 9.95
340 TestMultiNode/serial/StopNode 2.27
341 TestMultiNode/serial/StartAfterStop 7.39
342 TestMultiNode/serial/RestartKeepsNodes 84.2
343 TestMultiNode/serial/DeleteNode 5.3
344 TestMultiNode/serial/StopMultiNode 28.66
345 TestMultiNode/serial/RestartMultiNode 52.64
346 TestMultiNode/serial/ValidateNameConflict 22.73
351 TestPreload 111.13
353 TestScheduledStopUnix 98.83
356 TestInsufficientStorage 8.93
357 TestRunningBinaryUpgrade 54.21
359 TestKubernetesUpgrade 319.05
360 TestMissingContainerUpgrade 92.2
363 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
364 TestPause/serial/Start 54.49
365 TestNoKubernetes/serial/StartWithK8s 33.52
366 TestNoKubernetes/serial/StartWithStopK8s 21.59
367 TestPause/serial/SecondStartNoReconfiguration 20.83
368 TestNoKubernetes/serial/Start 13.59
369 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
370 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
371 TestNoKubernetes/serial/ProfileList 1.94
372 TestNoKubernetes/serial/Stop 1.31
373 TestNoKubernetes/serial/StartNoArgs 8.1
375 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
376 TestStoppedBinaryUpgrade/Setup 3.44
377 TestStoppedBinaryUpgrade/Upgrade 288.89
392 TestNetworkPlugins/group/false 3.79
397 TestStartStop/group/old-k8s-version/serial/FirstStart 48.53
398 TestStartStop/group/old-k8s-version/serial/DeployApp 8.25
400 TestStartStop/group/old-k8s-version/serial/Stop 16.1
401 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
402 TestStartStop/group/old-k8s-version/serial/SecondStart 50.51
403 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
404 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
406 TestStartStop/group/no-preload/serial/FirstStart 49.6
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
410 TestStartStop/group/embed-certs/serial/FirstStart 48.96
412 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.95
413 TestStoppedBinaryUpgrade/MinikubeLogs 2.65
415 TestStartStop/group/newest-cni/serial/FirstStart 25.7
416 TestStartStop/group/no-preload/serial/DeployApp 8.25
418 TestStartStop/group/no-preload/serial/Stop 18.27
419 TestStartStop/group/embed-certs/serial/DeployApp 10.25
420 TestStartStop/group/newest-cni/serial/DeployApp 0
422 TestStartStop/group/newest-cni/serial/Stop 2.65
423 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
424 TestStartStop/group/newest-cni/serial/SecondStart 10.63
426 TestStartStop/group/embed-certs/serial/Stop 16.67
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
428 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
429 TestStartStop/group/no-preload/serial/SecondStart 52
430 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
431 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
432 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
435 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.98
436 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
437 TestStartStop/group/embed-certs/serial/SecondStart 45.98
438 TestNetworkPlugins/group/auto/Start 42.43
439 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
440 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.57
441 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
442 TestNetworkPlugins/group/auto/KubeletFlags 0.3
443 TestNetworkPlugins/group/auto/NetCatPod 9.18
444 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
445 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
446 TestNetworkPlugins/group/auto/DNS 0.12
447 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
448 TestNetworkPlugins/group/auto/Localhost 0.1
449 TestNetworkPlugins/group/auto/HairPin 0.09
450 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
452 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
454 TestNetworkPlugins/group/kindnet/Start 45.22
455 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
456 TestNetworkPlugins/group/calico/Start 58.19
457 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
458 TestNetworkPlugins/group/custom-flannel/Start 52.88
459 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
461 TestNetworkPlugins/group/enable-default-cni/Start 68.79
462 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
463 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
464 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
465 TestNetworkPlugins/group/kindnet/DNS 0.14
466 TestNetworkPlugins/group/kindnet/Localhost 0.12
467 TestNetworkPlugins/group/kindnet/HairPin 0.13
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
470 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
471 TestNetworkPlugins/group/calico/KubeletFlags 0.32
472 TestNetworkPlugins/group/calico/NetCatPod 9.21
473 TestNetworkPlugins/group/custom-flannel/DNS 0.12
474 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
475 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
476 TestNetworkPlugins/group/calico/DNS 0.15
477 TestNetworkPlugins/group/calico/Localhost 0.11
478 TestNetworkPlugins/group/calico/HairPin 0.11
479 TestNetworkPlugins/group/flannel/Start 51.13
480 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
481 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
482 TestNetworkPlugins/group/bridge/Start 65.42
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
486 TestNetworkPlugins/group/flannel/ControllerPod 6.01
487 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
488 TestNetworkPlugins/group/flannel/NetCatPod 8.18
489 TestNetworkPlugins/group/flannel/DNS 0.11
490 TestNetworkPlugins/group/flannel/Localhost 0.09
491 TestNetworkPlugins/group/flannel/HairPin 0.09
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
493 TestNetworkPlugins/group/bridge/NetCatPod 8.19
494 TestNetworkPlugins/group/bridge/DNS 0.16
495 TestNetworkPlugins/group/bridge/Localhost 0.09
496 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (13.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-122874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-122874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.264189936s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.26s)
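Note: the -o=json flag used above makes "minikube start" stream its progress as JSON events on stdout, which is what this json-events test consumes. Below is a minimal sketch in Go (not the test's own code) of driving the same download-only invocation and decoding each event line generically. The binary path, profile name, and flags are copied from the log; the assumption that every stdout line is a standalone JSON object is mine, and lines that do not parse are simply skipped.

// json_events_sketch.go — sketch of running the download-only start seen above
// and decoding its JSON event stream (assumptions noted in the text above).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Command taken from the test log; adjust the binary path for a local checkout.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-o=json", "--download-only", "-p", "download-only-122874",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Assumption: each stdout line is a self-contained JSON event.
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON line; skip
		}
		fmt.Printf("event: %v\n", ev)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}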

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 11:14:41.047909 1672941 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 11:14:41.047997 1672941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
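Note: the path checked here follows a fixed naming scheme: the preload tarball for a given Kubernetes version and runtime sits under the cache directory of MINIKUBE_HOME. Below is a minimal sketch in Go of that lookup; the filename pattern, the "v18" preload schema segment, and the "cri-o-overlay" runtime segment are copied from the log above and treated as assumptions valid for this report, not a stable interface.

// preload_exists_sketch.go — sketch of the local preload-cache check performed
// by this test (naming pattern taken from the log above).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected cache location of a preloaded-images tarball.
func preloadPath(minikubeHome, k8sVersion, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-%s.tar.lz4", k8sVersion, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// MINIKUBE_HOME points at the .minikube directory, as in the log above.
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		home = filepath.Join(os.Getenv("HOME"), ".minikube")
	}
	p := preloadPath(home, "v1.28.0", "amd64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("no local preload:", p)
	}
}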

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-122874
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-122874: exit status 85 (100.27517ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-122874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-122874 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:14:27
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:14:27.840754 1672953 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:14:27.841042 1672953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:27.841053 1672953 out.go:374] Setting ErrFile to fd 2...
	I1217 11:14:27.841058 1672953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:27.841345 1672953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	W1217 11:14:27.841508 1672953 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21808-1669348/.minikube/config/config.json: open /home/jenkins/minikube-integration/21808-1669348/.minikube/config/config.json: no such file or directory
	I1217 11:14:27.842070 1672953 out.go:368] Setting JSON to true
	I1217 11:14:27.843095 1672953 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17813,"bootTime":1765952255,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:14:27.843164 1672953 start.go:143] virtualization: kvm guest
	I1217 11:14:27.847713 1672953 out.go:99] [download-only-122874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:14:27.847957 1672953 notify.go:221] Checking for updates...
	W1217 11:14:27.847920 1672953 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 11:14:27.849520 1672953 out.go:171] MINIKUBE_LOCATION=21808
	I1217 11:14:27.850998 1672953 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:14:27.852419 1672953 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:14:27.856862 1672953 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:14:27.858348 1672953 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 11:14:27.860697 1672953 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 11:14:27.860995 1672953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:14:27.886681 1672953 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:14:27.886785 1672953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:14:27.946598 1672953 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-17 11:14:27.936640839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:14:27.946737 1672953 docker.go:319] overlay module found
	I1217 11:14:27.948589 1672953 out.go:99] Using the docker driver based on user configuration
	I1217 11:14:27.948622 1672953 start.go:309] selected driver: docker
	I1217 11:14:27.948630 1672953 start.go:927] validating driver "docker" against <nil>
	I1217 11:14:27.948754 1672953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:14:28.004474 1672953 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-17 11:14:27.994930835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:14:28.004685 1672953 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:14:28.005197 1672953 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 11:14:28.005342 1672953 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:14:28.007033 1672953 out.go:171] Using Docker driver with root privileges
	I1217 11:14:28.008284 1672953 cni.go:84] Creating CNI manager for ""
	I1217 11:14:28.008349 1672953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:14:28.008360 1672953 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:14:28.008426 1672953 start.go:353] cluster config:
	{Name:download-only-122874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-122874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:14:28.009931 1672953 out.go:99] Starting "download-only-122874" primary control-plane node in "download-only-122874" cluster
	I1217 11:14:28.009967 1672953 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:14:28.011182 1672953 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:14:28.011217 1672953 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:14:28.011308 1672953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:14:28.029317 1672953 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 11:14:28.029551 1672953 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 11:14:28.029666 1672953 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 11:14:28.381315 1672953 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:28.381406 1672953 cache.go:65] Caching tarball of preloaded images
	I1217 11:14:28.381650 1672953 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:14:28.383775 1672953 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 11:14:28.383811 1672953 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 11:14:28.486327 1672953 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1217 11:14:28.486462 1672953 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:35.454476 1672953 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	
	
	* The control-plane node download-only-122874 host does not exist
	  To start a cluster, run: "minikube start -p download-only-122874"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
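Note: the "Last Start" log captured above also shows where the preload comes from when it is not cached locally: a GCS object keyed by preload schema version and Kubernetes version, fetched with an md5 checksum passed as a query parameter. Below is a minimal sketch in Go of assembling that URL; the host, path layout, and checksum format are copied from the log and are not asserted to be a stable interface.

// preload_url_sketch.go — sketch of the remote preload URL construction seen in
// the log above (values copied from the v1.28.0 run).
package main

import "fmt"

func preloadURL(k8sVersion, arch, md5sum string) string {
	base := fmt.Sprintf(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%s/preloaded-images-k8s-v18-%s-cri-o-overlay-%s.tar.lz4",
		k8sVersion, k8sVersion, arch)
	// The downloader verifies the tarball against this checksum query parameter.
	return base + "?checksum=md5:" + md5sum
}

func main() {
	fmt.Println(preloadURL("v1.28.0", "amd64", "72bc7f8573f574c02d8c9a9b3496176b"))
}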

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-122874
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (10.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-921404 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-921404 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.017826832s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (10.02s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 11:14:51.583203 1672941 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 11:14:51.583255 1672941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-921404
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-921404: exit status 85 (86.330646ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-122874 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ delete  │ -p download-only-122874                                                                                                                                                   │ download-only-122874 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ start   │ -o=json --download-only -p download-only-921404 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-921404 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:14:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:14:41.622811 1673344 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:14:41.622917 1673344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:41.622923 1673344 out.go:374] Setting ErrFile to fd 2...
	I1217 11:14:41.622926 1673344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:41.623182 1673344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:14:41.623689 1673344 out.go:368] Setting JSON to true
	I1217 11:14:41.624719 1673344 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17827,"bootTime":1765952255,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:14:41.624786 1673344 start.go:143] virtualization: kvm guest
	I1217 11:14:41.627197 1673344 out.go:99] [download-only-921404] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:14:41.627455 1673344 notify.go:221] Checking for updates...
	I1217 11:14:41.628852 1673344 out.go:171] MINIKUBE_LOCATION=21808
	I1217 11:14:41.630405 1673344 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:14:41.632037 1673344 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:14:41.633501 1673344 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:14:41.634805 1673344 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 11:14:41.637313 1673344 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 11:14:41.637658 1673344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:14:41.661922 1673344 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:14:41.662045 1673344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:14:41.716874 1673344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 11:14:41.706808907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:14:41.716999 1673344 docker.go:319] overlay module found
	I1217 11:14:41.718943 1673344 out.go:99] Using the docker driver based on user configuration
	I1217 11:14:41.718990 1673344 start.go:309] selected driver: docker
	I1217 11:14:41.719000 1673344 start.go:927] validating driver "docker" against <nil>
	I1217 11:14:41.719095 1673344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:14:41.773672 1673344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 11:14:41.764232556 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:14:41.773860 1673344 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:14:41.774388 1673344 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 11:14:41.774615 1673344 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:14:41.776581 1673344 out.go:171] Using Docker driver with root privileges
	I1217 11:14:41.778016 1673344 cni.go:84] Creating CNI manager for ""
	I1217 11:14:41.778083 1673344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:14:41.778091 1673344 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:14:41.778160 1673344 start.go:353] cluster config:
	{Name:download-only-921404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-921404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:14:41.779573 1673344 out.go:99] Starting "download-only-921404" primary control-plane node in "download-only-921404" cluster
	I1217 11:14:41.779592 1673344 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:14:41.780722 1673344 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:14:41.780761 1673344 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:14:41.780867 1673344 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:14:41.798360 1673344 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 11:14:41.798553 1673344 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 11:14:41.798577 1673344 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 11:14:41.798584 1673344 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 11:14:41.798597 1673344 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 11:14:42.163085 1673344 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:42.163127 1673344 cache.go:65] Caching tarball of preloaded images
	I1217 11:14:42.163318 1673344 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:14:42.165124 1673344 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1217 11:14:42.165147 1673344 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 11:14:42.269473 1673344 preload.go:295] Got checksum from GCS API "fdea575627999e8631bb8fa579d884c7"
	I1217 11:14:42.269544 1673344 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:fdea575627999e8631bb8fa579d884c7 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-921404 host does not exist
	  To start a cluster, run: "minikube start -p download-only-921404"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.27s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-921404
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (10.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-951167 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-951167 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.931675786s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (10.93s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 11:15:03.032043 1672941 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 11:15:03.032094 1672941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-951167
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-951167: exit status 85 (126.845894ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-122874 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ delete  │ -p download-only-122874                                                                                                                                                        │ download-only-122874 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ start   │ -o=json --download-only -p download-only-921404 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-921404 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ delete  │ -p download-only-921404                                                                                                                                                        │ download-only-921404 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ start   │ -o=json --download-only -p download-only-951167 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-951167 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:14:52
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:14:52.158312 1673712 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:14:52.158620 1673712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:52.158630 1673712 out.go:374] Setting ErrFile to fd 2...
	I1217 11:14:52.158634 1673712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:52.158887 1673712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:14:52.159374 1673712 out.go:368] Setting JSON to true
	I1217 11:14:52.160382 1673712 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17837,"bootTime":1765952255,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:14:52.160475 1673712 start.go:143] virtualization: kvm guest
	I1217 11:14:52.162616 1673712 out.go:99] [download-only-951167] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:14:52.162846 1673712 notify.go:221] Checking for updates...
	I1217 11:14:52.164507 1673712 out.go:171] MINIKUBE_LOCATION=21808
	I1217 11:14:52.166133 1673712 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:14:52.167957 1673712 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:14:52.169443 1673712 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:14:52.171087 1673712 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 11:14:52.174244 1673712 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 11:14:52.174585 1673712 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:14:52.200849 1673712 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:14:52.200971 1673712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:14:52.263309 1673712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 11:14:52.252649748 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:14:52.263433 1673712 docker.go:319] overlay module found
	I1217 11:14:52.265312 1673712 out.go:99] Using the docker driver based on user configuration
	I1217 11:14:52.265354 1673712 start.go:309] selected driver: docker
	I1217 11:14:52.265361 1673712 start.go:927] validating driver "docker" against <nil>
	I1217 11:14:52.265476 1673712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:14:52.327350 1673712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 11:14:52.31669663 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:14:52.327621 1673712 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:14:52.328142 1673712 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 11:14:52.328282 1673712 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:14:52.330319 1673712 out.go:171] Using Docker driver with root privileges
	I1217 11:14:52.331983 1673712 cni.go:84] Creating CNI manager for ""
	I1217 11:14:52.332066 1673712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 11:14:52.332079 1673712 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:14:52.332167 1673712 start.go:353] cluster config:
	{Name:download-only-951167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-951167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:14:52.333773 1673712 out.go:99] Starting "download-only-951167" primary control-plane node in "download-only-951167" cluster
	I1217 11:14:52.333807 1673712 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 11:14:52.335135 1673712 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:14:52.335183 1673712 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:14:52.335303 1673712 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:14:52.354171 1673712 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 11:14:52.354335 1673712 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 11:14:52.354373 1673712 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 11:14:52.354381 1673712 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 11:14:52.354391 1673712 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 11:14:52.703789 1673712 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:52.703827 1673712 cache.go:65] Caching tarball of preloaded images
	I1217 11:14:52.704035 1673712 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:14:52.706024 1673712 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1217 11:14:52.706057 1673712 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 11:14:52.807410 1673712 preload.go:295] Got checksum from GCS API "46a82b10f18f180acaede5af8ca381a9"
	I1217 11:14:52.807466 1673712 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:46a82b10f18f180acaede5af8ca381a9 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:15:01.459310 1673712 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 11:15:01.460550 1673712 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/download-only-951167/config.json ...
	I1217 11:15:01.460624 1673712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/download-only-951167/config.json: {Name:mk6093708332a2b2c60c88ef7b0b04c064e03650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:01.460882 1673712 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:15:01.461402 1673712 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl
	
	
	* The control-plane node download-only-951167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-951167"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-951167
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-821854 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-821854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-821854
--- PASS: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestBinaryMirror (0.9s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 11:15:04.499672 1672941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-011622 --alsologtostderr --binary-mirror http://127.0.0.1:38139 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-011622" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-011622
--- PASS: TestBinaryMirror (0.90s)

                                                
                                    
TestOffline (52.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-990385 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-990385 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (47.852557697s)
helpers_test.go:176: Cleaning up "offline-crio-990385" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-990385
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-990385: (5.129681923s)
--- PASS: TestOffline (52.98s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-767877
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-767877: exit status 85 (72.832115ms)

                                                
                                                
-- stdout --
	* Profile "addons-767877" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-767877"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-767877
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-767877: exit status 85 (72.097703ms)

                                                
                                                
-- stdout --
	* Profile "addons-767877" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-767877"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (106.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-767877 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-767877 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m46.607346367s)
--- PASS: TestAddons/Setup (106.61s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-767877 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-767877 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-767877 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-767877 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b0860c9a-c9a2-4707-b609-2022a71cd161] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b0860c9a-c9a2-4707-b609-2022a71cd161] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00428279s
addons_test.go:696: (dbg) Run:  kubectl --context addons-767877 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-767877 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-767877 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.83s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-767877
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-767877: (16.523486678s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-767877
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-767877
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-767877
--- PASS: TestAddons/StoppedEnableDisable (16.83s)

                                                
                                    
TestCertOptions (27.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-714247 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1217 11:50:49.032327 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-714247 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.641219311s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-714247 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-714247 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-714247 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-714247" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-714247
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-714247: (2.469120442s)
--- PASS: TestCertOptions (27.78s)

                                                
                                    
TestCertExpiration (213.08s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-067996 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-067996 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.844490557s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-067996 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.754519615s)
helpers_test.go:176: Cleaning up "cert-expiration-067996" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-067996
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-067996: (2.480222988s)
--- PASS: TestCertExpiration (213.08s)

                                                
                                    
TestForceSystemdFlag (26.64s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-881315 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-881315 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.899711716s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-881315 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-881315" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-881315
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-881315: (2.449234169s)
--- PASS: TestForceSystemdFlag (26.64s)

                                                
                                    
TestForceSystemdEnv (36.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-154933 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-154933 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.650206299s)
helpers_test.go:176: Cleaning up "force-systemd-env-154933" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-154933
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-154933: (2.61115241s)
--- PASS: TestForceSystemdEnv (36.26s)

                                                
                                    
TestErrorSpam/setup (20.3s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-439600 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-439600 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-439600 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-439600 --driver=docker  --container-runtime=crio: (20.304364974s)
--- PASS: TestErrorSpam/setup (20.30s)

                                                
                                    
TestErrorSpam/start (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

                                                
                                    
TestErrorSpam/status (0.99s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
TestErrorSpam/pause (7.06s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause: exit status 80 (2.298988046s)

                                                
                                                
-- stdout --
	* Pausing node nospam-439600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:20:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause: exit status 80 (2.459864162s)

                                                
                                                
-- stdout --
	* Pausing node nospam-439600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:20:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause: exit status 80 (2.299662096s)

                                                
                                                
-- stdout --
	* Pausing node nospam-439600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:20:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.06s)

                                                
                                    
TestErrorSpam/unpause (6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause: exit status 80 (1.795170046s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-439600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:20:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause: exit status 80 (1.868826561s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-439600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:20:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause: exit status 80 (2.330698175s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-439600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.00s)

                                                
                                    
TestErrorSpam/stop (2.61s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 stop: (2.384920528s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439600 --log_dir /tmp/nospam-439600 stop
--- PASS: TestErrorSpam/stop (2.61s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/test/nested/copy/1672941/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.34s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-212713 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-212713 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.335219504s)
--- PASS: TestFunctional/serial/StartWithProxy (41.34s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1217 11:21:22.732568 1672941 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-212713 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-212713 --alsologtostderr -v=8: (6.633267034s)
functional_test.go:678: soft start took 6.634065493s for "functional-212713" cluster.
I1217 11:21:29.366321 1672941 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (6.63s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-212713 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-212713 /tmp/TestFunctionalserialCacheCmdcacheadd_local3757886674/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cache add minikube-local-cache-test:functional-212713
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 cache add minikube-local-cache-test:functional-212713: (1.717284138s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cache delete minikube-local-cache-test:functional-212713
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-212713
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.479436ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 kubectl -- --context functional-212713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-212713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (77.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-212713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 11:21:53.261182 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.267628 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.279030 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.300476 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.341971 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.423456 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.585011 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:53.906739 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:54.548822 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:55.830215 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:21:58.393136 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:03.515138 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:13.756676 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:34.238132 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-212713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m17.309285231s)
functional_test.go:776: restart took 1m17.309438014s for "functional-212713" cluster.
I1217 11:22:54.091063 1672941 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (77.31s)
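
Note: the restart above shows how component flags are injected through --extra-config. A minimal sketch of the same invocation, with the profile name and plugin value taken from the log:

    # restart the existing profile, passing an extra kube-apiserver flag and
    # waiting for all components to become ready (~77s in this run)
    out/minikube-linux-amd64 start -p functional-212713 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all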

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-212713 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 logs: (1.348202444s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 logs --file /tmp/TestFunctionalserialLogsFileCmd874969935/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 logs --file /tmp/TestFunctionalserialLogsFileCmd874969935/001/logs.txt: (1.328799147s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.66s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-212713 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-212713
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-212713: exit status 115 (359.354788ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32663 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-212713 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-212713 delete -f testdata/invalidsvc.yaml: (2.124267801s)
--- PASS: TestFunctional/serial/InvalidService (5.66s)
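
Note: the failure path above can be reproduced with the same manifest; the exit-status comment restates what the log records.

    # create a Service whose selector matches no running pod
    kubectl --context functional-212713 apply -f testdata/invalidsvc.yaml
    # minikube refuses to open it: exit status 115, SVC_UNREACHABLE
    out/minikube-linux-amd64 service invalid-svc -p functional-212713
    # clean up
    kubectl --context functional-212713 delete -f testdata/invalidsvc.yaml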

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 config get cpus: exit status 14 (72.187841ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 config get cpus: exit status 14 (72.959934ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
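
Note: a sketch of the config round trip checked above. The exit status 14 for a missing key is taken from the log; the inline comments are interpretation.

    out/minikube-linux-amd64 -p functional-212713 config unset cpus
    out/minikube-linux-amd64 -p functional-212713 config get cpus    # exits 14: key not found
    out/minikube-linux-amd64 -p functional-212713 config set cpus 2
    out/minikube-linux-amd64 -p functional-212713 config get cpus    # now succeeds (value was just set)
    out/minikube-linux-amd64 -p functional-212713 config unset cpus
    out/minikube-linux-amd64 -p functional-212713 config get cpus    # exits 14 again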

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-212713 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-212713 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1711316: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.44s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-212713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-212713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (209.114959ms)

                                                
                                                
-- stdout --
	* [functional-212713] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:23:26.651628 1710601 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:23:26.651723 1710601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:23:26.651728 1710601 out.go:374] Setting ErrFile to fd 2...
	I1217 11:23:26.651732 1710601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:23:26.651958 1710601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:23:26.652430 1710601 out.go:368] Setting JSON to false
	I1217 11:23:26.653741 1710601 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":18352,"bootTime":1765952255,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:23:26.653823 1710601 start.go:143] virtualization: kvm guest
	I1217 11:23:26.659663 1710601 out.go:179] * [functional-212713] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:23:26.661859 1710601 notify.go:221] Checking for updates...
	I1217 11:23:26.661924 1710601 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:23:26.664003 1710601 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:23:26.666106 1710601 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:23:26.667738 1710601 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:23:26.671798 1710601 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:23:26.673526 1710601 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:23:26.675613 1710601 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:23:26.676241 1710601 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:23:26.703721 1710601 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:23:26.703892 1710601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:23:26.770816 1710601 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 11:23:26.75839438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:23:26.770971 1710601 docker.go:319] overlay module found
	I1217 11:23:26.773270 1710601 out.go:179] * Using the docker driver based on existing profile
	I1217 11:23:26.774700 1710601 start.go:309] selected driver: docker
	I1217 11:23:26.774726 1710601 start.go:927] validating driver "docker" against &{Name:functional-212713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-212713 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:23:26.774864 1710601 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:23:26.777477 1710601 out.go:203] 
	W1217 11:23:26.778838 1710601 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 11:23:26.780627 1710601 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-212713 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
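
Note: the memory validation seen above can be triggered on its own; the 1800MB floor and exit status 23 are as reported in this log.

    # requesting 250MB is rejected during --dry-run validation
    # (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY: usable minimum is 1800MB)
    out/minikube-linux-amd64 start -p functional-212713 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio
    # without the memory override the same dry run validates cleanly
    out/minikube-linux-amd64 start -p functional-212713 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio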

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-212713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-212713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.322693ms)

                                                
                                                
-- stdout --
	* [functional-212713] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:23:27.111050 1710961 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:23:27.111159 1710961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:23:27.111170 1710961 out.go:374] Setting ErrFile to fd 2...
	I1217 11:23:27.111177 1710961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:23:27.111567 1710961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:23:27.112109 1710961 out.go:368] Setting JSON to false
	I1217 11:23:27.113235 1710961 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":18352,"bootTime":1765952255,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:23:27.113313 1710961 start.go:143] virtualization: kvm guest
	I1217 11:23:27.115370 1710961 out.go:179] * [functional-212713] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 11:23:27.117066 1710961 notify.go:221] Checking for updates...
	I1217 11:23:27.117094 1710961 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:23:27.118375 1710961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:23:27.120110 1710961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:23:27.121707 1710961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:23:27.123245 1710961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:23:27.124742 1710961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:23:27.126698 1710961 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:23:27.127277 1710961 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:23:27.154965 1710961 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:23:27.155079 1710961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:23:27.217291 1710961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 11:23:27.206636912 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:23:27.217405 1710961 docker.go:319] overlay module found
	I1217 11:23:27.220635 1710961 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 11:23:27.222064 1710961 start.go:309] selected driver: docker
	I1217 11:23:27.222092 1710961 start.go:927] validating driver "docker" against &{Name:functional-212713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-212713 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:23:27.222227 1710961 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:23:27.224366 1710961 out.go:203] 
	W1217 11:23:27.225561 1710961 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 11:23:27.226794 1710961 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-212713 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-212713 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-z4rgj" [b783d0fd-2bda-41c4-9207-994187fbbab0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-z4rgj" [b783d0fd-2bda-41c4-9207-994187fbbab0] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004133352s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30970
functional_test.go:1680: http://192.168.49.2:30970: success! body:
Request served by hello-node-connect-7d85dfc575-z4rgj

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30970
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.71s)
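
Note: the NodePort round trip above amounts to the sequence below. The final curl is an illustrative check that is not part of the test, and the URL differs per run (http://192.168.49.2:30970 in this one).

    kubectl --context functional-212713 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-212713 expose deployment hello-node-connect --type=NodePort --port=8080
    # print the reachable URL once the pod is Running
    out/minikube-linux-amd64 -p functional-212713 service hello-node-connect --url
    # illustrative: the echo server answers a plain GET on that URL
    curl http://192.168.49.2:30970/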

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c3ea2531-0610-4260-a933-838ff2e2b52f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005346795s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-212713 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-212713 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-212713 get pvc myclaim -o=json
I1217 11:23:11.586494 1672941 retry.go:31] will retry after 1.341702544s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1870a44c-9ca9-4280-9249-db38e98a6ba3 ResourceVersion:614 Generation:0 CreationTimestamp:2025-12-17 11:23:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0016f8140 VolumeMode:0xc0016f8150 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-212713 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-212713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f7aecac9-16c0-49e6-9e3b-fcc88e8f6a28] Pending
helpers_test.go:353: "sp-pod" [f7aecac9-16c0-49e6-9e3b-fcc88e8f6a28] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f7aecac9-16c0-49e6-9e3b-fcc88e8f6a28] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004672662s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-212713 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-212713 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-212713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [de16ddec-f60b-4b08-8e39-93998411c0cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [de16ddec-f60b-4b08-8e39-93998411c0cf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003924979s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-212713 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.97s)
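
Note: the persistence check above follows this sequence (manifest paths and pod name copied from the log); the last comment restates the test's final assertion that data written before the pod restart survives on the claim.

    kubectl --context functional-212713 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-212713 get pvc myclaim -o=json        # poll until phase is Bound
    kubectl --context functional-212713 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-212713 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-212713 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-212713 apply -f testdata/storage-provisioner/pod.yaml
    # the file written before the restart is still on the claim
    kubectl --context functional-212713 exec sp-pod -- ls /tmp/mount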

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh -n functional-212713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cp functional-212713:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1662838180/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh -n functional-212713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh -n functional-212713 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)
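
Note: the copy paths checked above reduce to copying a file into the node, copying it back out, and reading it over ssh. Commands are as logged; only the comments are added, and the /tmp destination is the test's temp directory for this run.

    # host -> node
    out/minikube-linux-amd64 -p functional-212713 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-212713 ssh -n functional-212713 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-amd64 -p functional-212713 cp functional-212713:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1662838180/001/cp-test.txt
    # host -> node, creating the destination directory on the fly
    out/minikube-linux-amd64 -p functional-212713 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt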

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-212713 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-sh49s" [b365d368-2e96-48ed-b68e-a21856997b66] Pending
helpers_test.go:353: "mysql-6bcdcbc558-sh49s" [b365d368-2e96-48ed-b68e-a21856997b66] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-sh49s" [b365d368-2e96-48ed-b68e-a21856997b66] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.00398549s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;": exit status 1 (99.824925ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:23:18.275415 1672941 retry.go:31] will retry after 1.41047143s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;": exit status 1 (97.345876ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:23:19.784476 1672941 retry.go:31] will retry after 2.059543947s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;": exit status 1 (103.453222ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:23:21.948559 1672941 retry.go:31] will retry after 1.403943229s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;": exit status 1 (127.081807ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:23:23.480710 1672941 retry.go:31] will retry after 2.981699123s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.61s)
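
Note: the retries above simply poll until mysqld inside the pod finishes initialising and accepts the root password. A hand-rolled equivalent (the loop itself is illustrative; the exec command and pod name are from the log):

    # keep retrying "show databases;" until the server accepts the connection
    until kubectl --context functional-212713 exec mysql-6bcdcbc558-sh49s -- \
          mysql -ppassword -e "show databases;"; do
      sleep 2
    done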

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1672941/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/test/nested/copy/1672941/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1672941.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/ssl/certs/1672941.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1672941.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /usr/share/ca-certificates/1672941.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/16729412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/ssl/certs/16729412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/16729412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /usr/share/ca-certificates/16729412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)
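
Note: the synced certificate is expected to be readable at each of the locations below (and likewise for the second cert and its hash-named links); paths are copied from the log.

    out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/ssl/certs/1672941.pem"
    out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /usr/share/ca-certificates/1672941.pem"
    out/minikube-linux-amd64 -p functional-212713 ssh "sudo cat /etc/ssl/certs/51391683.0"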

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-212713 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh "sudo systemctl is-active docker": exit status 1 (337.691436ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh "sudo systemctl is-active containerd": exit status 1 (295.643989ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
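
Note: with crio selected as the container runtime, the other runtimes should report inactive. The exit-status comment restates the log (systemctl is-active exits 3 for an inactive unit, which ssh propagates as exit status 1 from minikube).

    # both print "inactive" and exit non-zero on this crio-based node
    out/minikube-linux-amd64 -p functional-212713 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-212713 ssh "sudo systemctl is-active containerd"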

                                                
                                    
x
+
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-212713 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-212713
localhost/kicbase/echo-server:functional-212713
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-212713 image ls --format short --alsologtostderr:
I1217 11:23:31.410761 1712673 out.go:360] Setting OutFile to fd 1 ...
I1217 11:23:31.410936 1712673 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:31.410951 1712673 out.go:374] Setting ErrFile to fd 2...
I1217 11:23:31.410958 1712673 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:31.411199 1712673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:23:31.412099 1712673 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:31.412256 1712673 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:31.412985 1712673 cli_runner.go:164] Run: docker container inspect functional-212713 --format={{.State.Status}}
I1217 11:23:31.438139 1712673 ssh_runner.go:195] Run: systemctl --version
I1217 11:23:31.438201 1712673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-212713
I1217 11:23:31.462743 1712673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-212713/id_rsa Username:docker}
I1217 11:23:31.569759 1712673 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-212713 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3                               │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.3                               │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3                               │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest                                │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-212713                     │ 9ff6e4dd4002b │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ a236f84b9d5d2 │ 55.2MB │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.3                               │ 5826b25d990d7 │ 76MB   │
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-212713                     │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-212713 image ls --format table --alsologtostderr:
I1217 11:23:36.224168 1713634 out.go:360] Setting OutFile to fd 1 ...
I1217 11:23:36.224477 1713634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:36.224493 1713634 out.go:374] Setting ErrFile to fd 2...
I1217 11:23:36.224500 1713634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:36.224861 1713634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:23:36.225699 1713634 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:36.225845 1713634 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:36.226509 1713634 cli_runner.go:164] Run: docker container inspect functional-212713 --format={{.State.Status}}
I1217 11:23:36.246427 1713634 ssh_runner.go:195] Run: systemctl --version
I1217 11:23:36.246494 1713634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-212713
I1217 11:23:36.268260 1713634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-212713/id_rsa Username:docker}
I1217 11:23:36.363993 1713634 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-212713 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e857
1251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9ff6e4dd4002b80bd4fbfbbd6f20028a36c565abdac452d0c257563880678284","repoDigests":["localhost/minikube-local-cache-test@sha256:f6b61bff2265640ebec27bb313c692dede9591db2dbb9b902cc4156b06543db4"],"repoTags":["localhost/minikube-local-cache-test:functional-212713"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDi
gests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/ku
be-scheduler:v1.34.3"],"size":"53853013"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84
d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe6
16dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags"
:["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhos
t/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-212713"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-212713 image ls --format json --alsologtostderr:
I1217 11:23:36.172435 1713579 out.go:360] Setting OutFile to fd 1 ...
I1217 11:23:36.172742 1713579 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:36.172754 1713579 out.go:374] Setting ErrFile to fd 2...
I1217 11:23:36.172761 1713579 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:36.172984 1713579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:23:36.173657 1713579 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:36.173782 1713579 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:36.174281 1713579 cli_runner.go:164] Run: docker container inspect functional-212713 --format={{.State.Status}}
I1217 11:23:36.195417 1713579 ssh_runner.go:195] Run: systemctl --version
I1217 11:23:36.195497 1713579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-212713
I1217 11:23:36.219131 1713579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-212713/id_rsa Username:docker}
I1217 11:23:36.325623 1713579 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
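The JSON blob above is an array of objects with id, repoDigests, repoTags and size (the size is a byte count reported as a string). A small Go sketch for decoding that output, assuming minikube is on PATH and the profile name from this run; the struct here is illustrative, not minikube's own type:

// parse_image_ls.go - sketch of decoding `minikube image ls --format json`
// output like the blob above. Field names mirror the keys visible in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, reported as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-212713",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Println(im.ID[:12], im.RepoTags, im.Size)
	}
}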

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-212713 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9ff6e4dd4002b80bd4fbfbbd6f20028a36c565abdac452d0c257563880678284
repoDigests:
- localhost/minikube-local-cache-test@sha256:f6b61bff2265640ebec27bb313c692dede9591db2dbb9b902cc4156b06543db4
repoTags:
- localhost/minikube-local-cache-test:functional-212713
size: "3330"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-212713
size: "4943877"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-212713 image ls --format yaml --alsologtostderr:
I1217 11:23:31.695448 1712727 out.go:360] Setting OutFile to fd 1 ...
I1217 11:23:31.695778 1712727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:31.695792 1712727 out.go:374] Setting ErrFile to fd 2...
I1217 11:23:31.695798 1712727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:31.696087 1712727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:23:31.697033 1712727 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:31.697180 1712727 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:31.697904 1712727 cli_runner.go:164] Run: docker container inspect functional-212713 --format={{.State.Status}}
I1217 11:23:31.721839 1712727 ssh_runner.go:195] Run: systemctl --version
I1217 11:23:31.721907 1712727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-212713
I1217 11:23:31.748630 1712727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-212713/id_rsa Username:docker}
I1217 11:23:31.854605 1712727 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh pgrep buildkitd: exit status 1 (347.278956ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image build -t localhost/my-image:functional-212713 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 image build -t localhost/my-image:functional-212713 testdata/build --alsologtostderr: (4.435597603s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-212713 image build -t localhost/my-image:functional-212713 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 39d667b2a64
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-212713
--> d7bbaa6767a
Successfully tagged localhost/my-image:functional-212713
d7bbaa6767ad4a86a6711d2e83b232399408a465cb26dd9c05b44cf7a11a0fd0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-212713 image build -t localhost/my-image:functional-212713 testdata/build --alsologtostderr:
I1217 11:23:32.330420 1712900 out.go:360] Setting OutFile to fd 1 ...
I1217 11:23:32.330591 1712900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:32.330602 1712900 out.go:374] Setting ErrFile to fd 2...
I1217 11:23:32.330613 1712900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:23:32.330937 1712900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:23:32.331705 1712900 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:32.332762 1712900 config.go:182] Loaded profile config "functional-212713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:23:32.333515 1712900 cli_runner.go:164] Run: docker container inspect functional-212713 --format={{.State.Status}}
I1217 11:23:32.359154 1712900 ssh_runner.go:195] Run: systemctl --version
I1217 11:23:32.359214 1712900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-212713
I1217 11:23:32.383115 1712900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-212713/id_rsa Username:docker}
I1217 11:23:32.489805 1712900 build_images.go:162] Building image from path: /tmp/build.1256523285.tar
I1217 11:23:32.489882 1712900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 11:23:32.501586 1712900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1256523285.tar
I1217 11:23:32.506849 1712900 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1256523285.tar: stat -c "%s %y" /var/lib/minikube/build/build.1256523285.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1256523285.tar': No such file or directory
I1217 11:23:32.506881 1712900 ssh_runner.go:362] scp /tmp/build.1256523285.tar --> /var/lib/minikube/build/build.1256523285.tar (3072 bytes)
I1217 11:23:32.531762 1712900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1256523285
I1217 11:23:32.542512 1712900 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1256523285 -xf /var/lib/minikube/build/build.1256523285.tar
I1217 11:23:32.551938 1712900 crio.go:315] Building image: /var/lib/minikube/build/build.1256523285
I1217 11:23:32.552048 1712900 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-212713 /var/lib/minikube/build/build.1256523285 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 11:23:36.664887 1712900 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-212713 /var/lib/minikube/build/build.1256523285 --cgroup-manager=cgroupfs: (4.112803208s)
I1217 11:23:36.664965 1712900 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1256523285
I1217 11:23:36.673748 1712900 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1256523285.tar
I1217 11:23:36.682105 1712900 build_images.go:218] Built localhost/my-image:functional-212713 from /tmp/build.1256523285.tar
I1217 11:23:36.682139 1712900 build_images.go:134] succeeded building to: functional-212713
I1217 11:23:36.682143 1712900 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.03s)
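The STEP 1/3..3/3 lines show the build context the test feeds in: a Dockerfile based on gcr.io/k8s-minikube/busybox with a RUN true layer and an ADD content.txt / step. minikube tars the context, copies it to /var/lib/minikube/build on the node, and runs podman build there (see the ssh_runner lines above). A hedged Go sketch that reconstructs an equivalent context and drives the same `minikube image build` path; the Dockerfile and content.txt bodies are inferred from the log, not copied from the repository's testdata/build directory:

// build_context.go - sketch of a build equivalent to the one logged above.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	dir, err := os.MkdirTemp("", "build-context")
	must(err)
	defer os.RemoveAll(dir)

	// Inferred from the STEP lines; contents are placeholders.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	must(os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644))
	must(os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644))

	cmd := exec.Command("minikube", "-p", "functional-212713", "image", "build",
		"-t", "localhost/my-image:functional-212713", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	must(cmd.Run())
}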

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.860112544s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-212713
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.89s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image load --daemon kicbase/echo-server:functional-212713 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 image load --daemon kicbase/echo-server:functional-212713 --alsologtostderr: (1.271956534s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-212713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-212713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-212713 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-212713 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1707538: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-212713 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
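The tunnel subtests start `minikube tunnel` as a background process and later tear it down by signalling the process. A rough Go sketch of that start/stop lifecycle, assuming minikube is on PATH and the functional-212713 profile exists; on Linux with the docker driver the tunnel may additionally prompt for sudo to install routes:

// tunnel_lifecycle.go - rough sketch of starting and stopping the tunnel.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-212713", "tunnel", "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("tunnel running with pid", cmd.Process.Pid)

	// Leave a window in which LoadBalancer services can pick up an ingress IP,
	// then kill the background process, much as the harness does on cleanup.
	time.Sleep(30 * time.Second)
	_ = cmd.Process.Kill()
	_ = cmd.Wait()
}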

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-212713 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [bab13240-154c-4001-a61b-de8c74871bed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [bab13240-154c-4001-a61b-de8c74871bed] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.003456063s
I1217 11:23:21.969084 1672941 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-212713
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image load --daemon kicbase/echo-server:functional-212713 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image save kicbase/echo-server:functional-212713 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image rm kicbase/echo-server:functional-212713 --alsologtostderr
I1217 11:23:13.147055 1672941 detect.go:223] nested VM detected
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.173855209s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)
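The save/remove/load sequence above exports the tagged image from the cluster's crio store to a tarball on the host and then re-imports it. A compact Go sketch of the same roundtrip; the tarball path here is a /tmp placeholder rather than the Jenkins workspace path in the log:

// image_roundtrip.go - sketch of the save/remove/load sequence above.
package main

import (
	"os"
	"os/exec"
)

func mk(args ...string) {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-212713"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	mk("image", "save", "kicbase/echo-server:functional-212713", "/tmp/echo-server-save.tar")
	mk("image", "rm", "kicbase/echo-server:functional-212713")
	mk("image", "load", "/tmp/echo-server-save.tar")
	mk("image", "ls") // the reloaded tag should appear in the listing again
}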

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-212713
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 image save --daemon kicbase/echo-server:functional-212713 --alsologtostderr
E1217 11:23:15.199873 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-212713
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-212713 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
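With the tunnel running, the LoadBalancer service receives an ingress IP that the test reads back with a jsonpath query. The same lookup as a small Go sketch, assuming kubectl is on PATH and the functional-212713 context exists:

// ingress_ip.go - sketch of the IngressIP lookup above: read the LoadBalancer
// ingress address that the running tunnel assigned to nginx-svc.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-212713",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("tunnel ingress IP: %s\n", out)
}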

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.153.162 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-212713 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-212713 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-212713 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-bkl67" [9c74f816-bf13-445e-bd12-62848456f7ae] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1217 11:23:23.019363 1672941 detect.go:223] nested VM detected
helpers_test.go:353: "hello-node-75c85bcc94-bkl67" [9c74f816-bf13-445e-bd12-62848456f7ae] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004186185s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)
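DeployApp creates the hello-node deployment from the kicbase/echo-server image, exposes it as a NodePort on 8080, and waits for the pod to become Ready before the later ServiceCmd subtests query its URL. A minimal Go sketch of those three steps, assuming kubectl is on PATH and the functional-212713 context; the readiness check here uses `kubectl wait` rather than the test's own label-selector polling:

// deploy_hello_node.go - sketch of the DeployApp steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-212713"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := [][]string{
		{"create", "deployment", "hello-node", "--image", "kicbase/echo-server"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		{"wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=120s"},
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			panic(fmt.Sprintf("kubectl %v: %v", s, err))
		}
	}
	fmt.Println("hello-node deployed and ready")
}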

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "353.69468ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.645733ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "364.011165ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.912734ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdany-port3117494581/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765970607234636282" to /tmp/TestFunctionalparallelMountCmdany-port3117494581/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765970607234636282" to /tmp/TestFunctionalparallelMountCmdany-port3117494581/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765970607234636282" to /tmp/TestFunctionalparallelMountCmdany-port3117494581/001/test-1765970607234636282
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.913587ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:23:27.551964 1672941 retry.go:31] will retry after 296.228566ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 11:23 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 11:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 11:23 test-1765970607234636282
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh cat /mount-9p/test-1765970607234636282
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-212713 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f2c6a4fe-f997-409e-bcab-70a181693b0a] Pending
helpers_test.go:353: "busybox-mount" [f2c6a4fe-f997-409e-bcab-70a181693b0a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f2c6a4fe-f997-409e-bcab-70a181693b0a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f2c6a4fe-f997-409e-bcab-70a181693b0a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003950749s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-212713 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdany-port3117494581/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.68s)
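The any-port mount test starts `minikube mount <hostdir>:/mount-9p` in the background and then polls `findmnt -T /mount-9p | grep 9p` inside the guest, retrying on exit status 1 until the 9p mount shows up (the retry.go line above is exactly that backoff). A small Go sketch of the polling half, assuming the mount daemon has already been started for the functional-212713 profile:

// wait_for_mount.go - sketch of the findmnt poll above; the 9p mount appears
// asynchronously, so the check retries while the command exits non-zero.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		err := exec.Command("minikube", "-p", "functional-212713",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount visible in the guest")
			return
		}
		fmt.Printf("attempt %d: not mounted yet (%v), retrying\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	panic("mount never appeared")
}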

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 service list: (1.745837543s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-212713 service list -o json: (1.787970624s)
functional_test.go:1504: Took "1.788096136s" to run "out/minikube-linux-amd64 -p functional-212713 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.79s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32518
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 service hello-node --url
2025/12/17 11:23:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32518
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdspecific-port3696400194/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.361881ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:23:39.213949 1672941 retry.go:31] will retry after 646.990131ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdspecific-port3696400194/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh "sudo umount -f /mount-9p": exit status 1 (279.768592ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-212713 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdspecific-port3696400194/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)
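
The specific-port subtest drives a 9p mount on a fixed port and then checks and tears it down over SSH. A hand-run equivalent of the commands quoted above, assuming a minikube binary on PATH in place of the CI-built out/minikube-linux-amd64; the host directory /tmp/mount-src is a stand-in for the test's generated temp directory:

$ minikube mount -p functional-212713 /tmp/mount-src:/mount-9p --port 46464 &   # background the 9p server on port 46464
$ minikube -p functional-212713 ssh "findmnt -T /mount-9p | grep 9p"            # confirm the guest sees a 9p filesystem
$ minikube -p functional-212713 ssh "sudo umount -f /mount-9p"                  # force-unmount; exits non-zero if already unmounted
$ kill %1                                                                        # stop the mount process, mirroring the test's cleanup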

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup593504032/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup593504032/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup593504032/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T" /mount1: exit status 1 (343.731909ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:23:41.264163 1672941 retry.go:31] will retry after 650.738437ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-212713 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-212713 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup593504032/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup593504032/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-212713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup593504032/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-212713
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-212713
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-212713
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-1669348/.minikube/files/etc/test/nested/copy/1672941/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (37.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-414245 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-414245 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (37.990225352s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (37.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 11:24:24.209478 1672941 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-414245 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-414245 --alsologtostderr -v=8: (6.574207311s)
functional_test.go:678: soft start took 6.574682706s for "functional-414245" cluster.
I1217 11:24:30.784142 1672941 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-414245 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2555967502/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cache add minikube-local-cache-test:functional-414245
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-414245 cache add minikube-local-cache-test:functional-414245: (1.71651338s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cache delete minikube-local-cache-test:functional-414245
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-414245
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.03s)
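
The add_local subtest caches a locally built image into the cluster. The same round trip by hand, assuming a Dockerfile in the current directory and a minikube binary on PATH (the harness builds from a generated temp directory and uses out/minikube-linux-amd64):

$ docker build -t minikube-local-cache-test:functional-414245 .                             # build a throwaway local image
$ minikube -p functional-414245 cache add minikube-local-cache-test:functional-414245       # load it into the node via the cache
$ minikube -p functional-414245 cache delete minikube-local-cache-test:functional-414245    # drop it from the cache again
$ docker rmi minikube-local-cache-test:functional-414245                                    # remove the host-side image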

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.76988ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cache reload
E1217 11:24:37.121727 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.62s)
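
cache_reload checks that minikube cache reload restores an image that was removed from inside the node. A minimal sketch of the same sequence, assuming a minikube binary on PATH and the profile name from this run:

$ minikube -p functional-414245 ssh sudo crictl rmi registry.k8s.io/pause:latest        # delete the image inside the node
$ minikube -p functional-414245 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now fails: no such image present
$ minikube -p functional-414245 cache reload                                            # re-push everything held in the local cache
$ minikube -p functional-414245 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again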

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 kubectl -- --context functional-414245 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-414245 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (63.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-414245 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-414245 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m3.133845689s)
functional_test.go:776: restart took 1m3.134094927s for "functional-414245" cluster.
I1217 11:25:41.099899 1672941 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (63.13s)
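
ExtraConfig restarts the existing profile with an additional apiserver flag and waits for every component, which accounts for most of the one-minute duration above. The equivalent invocation, assuming a minikube binary on PATH:

$ minikube start -p functional-414245 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all   # pass the admission-plugin setting through to kube-apiserver and block until the cluster is healthy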

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-414245 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)
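
ComponentHealth reads the control-plane pods straight from the API and checks their phase and Ready condition. The same information is available from kubectl with the label selector the test uses; the context name comes from this run:

$ kubectl --context functional-414245 get po -l tier=control-plane -n kube-system -o=json   # raw objects, as the test consumes them
$ kubectl --context functional-414245 get po -l tier=control-plane -n kube-system           # or just the READY/STATUS columns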

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-414245 logs: (1.325161264s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi672793291/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-414245 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi672793291/001/logs.txt: (1.350984766s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-414245 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-414245
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-414245: exit status 115 (355.927405ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31037 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-414245 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-414245 delete -f testdata/invalidsvc.yaml: (1.462040018s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.99s)
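
InvalidService creates a Service that no pod backs and expects minikube service to refuse it with exit status 115 (SVC_UNREACHABLE), exactly as shown above. A hand-run sketch, assuming minikube's testdata/invalidsvc.yaml (any Service whose selector matches nothing behaves the same) and a minikube binary on PATH:

$ kubectl --context functional-414245 apply -f testdata/invalidsvc.yaml    # Service exists, but nothing serves it
$ minikube service invalid-svc -p functional-414245                        # prints the NodePort table, then exits non-zero
$ echo $?                                                                   # 115 -> SVC_UNREACHABLE
$ kubectl --context functional-414245 delete -f testdata/invalidsvc.yaml    # clean up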

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 config get cpus: exit status 14 (98.800388ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 config get cpus: exit status 14 (88.13966ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.52s)
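
ConfigCmd round-trips a key through minikube config and leans on exit status 14 to signal a missing key, as the stderr above shows. The same sequence by hand, assuming a minikube binary on PATH:

$ minikube -p functional-414245 config unset cpus   # make sure the key is absent
$ minikube -p functional-414245 config get cpus     # exits 14: specified key could not be found in config
$ minikube -p functional-414245 config set cpus 2   # write a value
$ minikube -p functional-414245 config get cpus     # prints 2 and exits 0
$ minikube -p functional-414245 config unset cpus   # remove it; the next get exits 14 again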

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (7.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-414245 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-414245 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1732368: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (7.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-414245 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-414245 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (167.053507ms)

                                                
                                                
-- stdout --
	* [functional-414245] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:26:03.995403 1731878 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:26:03.995723 1731878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:26:03.995736 1731878 out.go:374] Setting ErrFile to fd 2...
	I1217 11:26:03.995743 1731878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:26:03.995980 1731878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:26:03.996505 1731878 out.go:368] Setting JSON to false
	I1217 11:26:03.997669 1731878 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":18509,"bootTime":1765952255,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:26:03.997732 1731878 start.go:143] virtualization: kvm guest
	I1217 11:26:03.999819 1731878 out.go:179] * [functional-414245] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:26:04.001388 1731878 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:26:04.001381 1731878 notify.go:221] Checking for updates...
	I1217 11:26:04.002711 1731878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:26:04.003806 1731878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:26:04.004932 1731878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:26:04.006152 1731878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:26:04.007276 1731878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:26:04.008951 1731878 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:26:04.009511 1731878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:26:04.033914 1731878 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:26:04.034082 1731878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:26:04.088746 1731878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 11:26:04.079219789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:26:04.088864 1731878 docker.go:319] overlay module found
	I1217 11:26:04.090952 1731878 out.go:179] * Using the docker driver based on existing profile
	I1217 11:26:04.092336 1731878 start.go:309] selected driver: docker
	I1217 11:26:04.092355 1731878 start.go:927] validating driver "docker" against &{Name:functional-414245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-414245 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:26:04.092495 1731878 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:26:04.094298 1731878 out.go:203] 
	W1217 11:26:04.095625 1731878 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 11:26:04.096850 1731878 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-414245 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.39s)
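
DryRun shows that argument validation still runs under --dry-run: a 250MB memory request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is changed. A minimal reproduction against the same profile, assuming a minikube binary on PATH:

$ minikube start -p functional-414245 --dry-run --memory 250MB --driver=docker --container-runtime=crio
$ echo $?   # 23 -> requested 250MiB is below the usable minimum of 1800MB
$ minikube start -p functional-414245 --dry-run --driver=docker --container-runtime=crio   # with sane defaults the dry run validates cleanly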

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-414245 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-414245 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (168.514108ms)

                                                
                                                
-- stdout --
	* [functional-414245] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:26:04.389291 1732098 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:26:04.389394 1732098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:26:04.389406 1732098 out.go:374] Setting ErrFile to fd 2...
	I1217 11:26:04.389413 1732098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:26:04.389779 1732098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:26:04.390305 1732098 out.go:368] Setting JSON to false
	I1217 11:26:04.391381 1732098 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":18509,"bootTime":1765952255,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:26:04.391444 1732098 start.go:143] virtualization: kvm guest
	I1217 11:26:04.393230 1732098 out.go:179] * [functional-414245] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 11:26:04.394844 1732098 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:26:04.394863 1732098 notify.go:221] Checking for updates...
	I1217 11:26:04.397082 1732098 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:26:04.398314 1732098 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:26:04.399415 1732098 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:26:04.400516 1732098 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:26:04.401727 1732098 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:26:04.403370 1732098 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:26:04.404170 1732098 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:26:04.429382 1732098 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:26:04.429489 1732098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:26:04.486090 1732098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 11:26:04.475624536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:26:04.486210 1732098 docker.go:319] overlay module found
	I1217 11:26:04.487874 1732098 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 11:26:04.488970 1732098 start.go:309] selected driver: docker
	I1217 11:26:04.488985 1732098 start.go:927] validating driver "docker" against &{Name:functional-414245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-414245 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:26:04.489081 1732098 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:26:04.490958 1732098 out.go:203] 
	W1217 11:26:04.492187 1732098 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 11:26:04.493488 1732098 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.13s)
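
StatusCmd exercises the three output modes of minikube status: the default table, a Go template over the status fields, and JSON. A sketch using the fields visible in this run (Host, Kubelet, APIServer, Kubeconfig); a minikube binary on PATH is assumed:

$ minikube -p functional-414245 status                                                    # human-readable summary
$ minikube -p functional-414245 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
$ minikube -p functional-414245 status -o json                                            # machine-readable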

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-414245 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-414245 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-7q2wc" [5e437c69-bf3a-4a3c-8bf5-5ed00ad182b5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-7q2wc" [5e437c69-bf3a-4a3c-8bf5-5ed00ad182b5] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004624439s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31424
functional_test.go:1680: http://192.168.49.2:31424: success! body:
Request served by hello-node-connect-9f67c86d4-7q2wc

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31424
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.74s)
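
ServiceCmdConnect deploys an echo server, exposes it as a NodePort, resolves the URL through minikube service --url, and fetches it; the body above is the echoed request. The same flow by hand, assuming kubectl, curl, and a minikube binary on PATH:

$ kubectl --context functional-414245 create deployment hello-node-connect --image kicbase/echo-server
$ kubectl --context functional-414245 expose deployment hello-node-connect --type=NodePort --port=8080
$ kubectl --context functional-414245 wait --for=condition=available deployment/hello-node-connect --timeout=120s   # stand-in for the test's pod polling
$ curl "$(minikube -p functional-414245 service hello-node-connect --url)"   # the echo server reflects the HTTP request back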

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (23.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f404e8eb-db31-4436-8c8c-9593de902ee4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004063398s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-414245 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-414245 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-414245 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-414245 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4ebb5d7e-416b-48b4-bd55-c604c28167b4] Pending
helpers_test.go:353: "sp-pod" [4ebb5d7e-416b-48b4-bd55-c604c28167b4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [4ebb5d7e-416b-48b4-bd55-c604c28167b4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003631024s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-414245 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-414245 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-414245 delete -f testdata/storage-provisioner/pod.yaml: (1.128414979s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-414245 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [5416058d-97b7-4950-b18e-2991e8d22230] Pending
helpers_test.go:353: "sp-pod" [5416058d-97b7-4950-b18e-2991e8d22230] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [5416058d-97b7-4950-b18e-2991e8d22230] Running
2025/12/17 11:26:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004512361s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-414245 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (23.95s)
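
PersistentVolumeClaim verifies that data written through the default storage provisioner survives pod recreation: a file is written into the mounted volume, the pod is deleted and recreated, and the file is read back. A hand-run sketch, assuming minikube's testdata/storage-provisioner manifests (any PVC plus a pod mounting it at /tmp/mount behaves the same) and the context name from this run:

$ kubectl --context functional-414245 apply -f testdata/storage-provisioner/pvc.yaml     # PVC "myclaim" against the default StorageClass
$ kubectl --context functional-414245 apply -f testdata/storage-provisioner/pod.yaml     # pod "sp-pod" mounting the claim at /tmp/mount
$ kubectl --context functional-414245 wait --for=condition=Ready pod/sp-pod --timeout=300s
$ kubectl --context functional-414245 exec sp-pod -- touch /tmp/mount/foo                # write through the volume
$ kubectl --context functional-414245 delete -f testdata/storage-provisioner/pod.yaml    # throw the pod away
$ kubectl --context functional-414245 apply -f testdata/storage-provisioner/pod.yaml     # recreate it against the same claim
$ kubectl --context functional-414245 wait --for=condition=Ready pod/sp-pod --timeout=300s
$ kubectl --context functional-414245 exec sp-pod -- ls /tmp/mount                       # "foo" is still there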

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.60s)
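
SSHCmd only needs to run a command inside the node and look at its stdout. A small Go sketch of that pattern, using the same binary path and profile name that appear in the log; the assertion is a simplified stand-in for what functional_test.go checks.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run "echo hello" inside the node via minikube ssh and compare the output.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-414245",
		"ssh", "echo hello").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) != "hello" {
		fmt.Printf("unexpected output: %q\n", out)
	}
}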

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh -n functional-414245 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cp functional-414245:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3002597553/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh -n functional-414245 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh -n functional-414245 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (25.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-414245 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-9x9v8" [514beff9-c90a-4af5-9816-d9922c23d74a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-9x9v8" [514beff9-c90a-4af5-9816-d9922c23d74a] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 20.004331523s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;": exit status 1 (108.492254ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:26:26.915207 1672941 retry.go:31] will retry after 1.338364919s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;": exit status 1 (139.606093ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:26:28.394473 1672941 retry.go:31] will retry after 1.703038945s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;": exit status 1 (114.19543ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:26:30.212679 1672941 retry.go:31] will retry after 1.481110931s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-414245 exec mysql-7d7b65bc95-9x9v8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (25.15s)
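
The access-denied and socket errors above are expected while the mysqld container finishes initializing; the harness simply retries, and the retry.go lines show the backoff it picked each time. A hedged sketch of that retry-until-deadline pattern; retryQuery, the backoff range, and the two-minute deadline are invented for illustration, not the retry.go helper itself.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryQuery re-runs the mysql query until it succeeds or the deadline passes,
// sleeping a short randomized interval between attempts.
func retryQuery(deadline time.Time) error {
	for {
		err := exec.Command("kubectl", "--context", "functional-414245",
			"exec", "mysql-7d7b65bc95-9x9v8", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mysql never became reachable: %v", err)
		}
		// back off between roughly 1s and 2s before the next attempt
		wait := time.Second + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
}

func main() {
	if err := retryQuery(time.Now().Add(2 * time.Minute)); err != nil {
		fmt.Println(err)
	}
}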

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1672941/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /etc/test/nested/copy/1672941/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1672941.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /etc/ssl/certs/1672941.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1672941.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /usr/share/ca-certificates/1672941.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/16729412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /etc/ssl/certs/16729412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/16729412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /usr/share/ca-certificates/16729412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.95s)
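
CertSync verifies that the certificates minikube synced into the guest are readable at each expected path. A sketch of one such check: read the file over minikube ssh and compare it to a local copy. The certMatches helper and the local path used in main are hypothetical; only the remote path comes from the log.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// certMatches reads a certificate inside the node over "minikube ssh sudo cat"
// and compares it byte-for-byte with a local copy.
func certMatches(profile, remotePath, localPath string) (bool, error) {
	remote, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo cat "+remotePath).Output()
	if err != nil {
		return false, err
	}
	local, err := os.ReadFile(localPath)
	if err != nil {
		return false, err
	}
	return bytes.Equal(remote, local), nil
}

func main() {
	// The local path here is a placeholder; the remote path is one of the files the test checks.
	ok, err := certMatches("functional-414245", "/etc/ssl/certs/1672941.pem", "local/1672941.pem")
	fmt.Println(ok, err)
}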

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-414245 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh "sudo systemctl is-active docker": exit status 1 (309.345894ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh "sudo systemctl is-active containerd": exit status 1 (303.866661ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.61s)
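
The non-zero exits above are the expected outcome here: systemctl is-active exits with status 3 when a unit is inactive, so the test tolerates the command failure and only inspects stdout. A sketch of that check; runtimeDisabled is a hypothetical helper, not the functional_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether the given systemd unit is inactive inside the
// minikube node. The error from Output is ignored on purpose: "systemctl is-active"
// exits non-zero for an inactive unit, and only the stdout matters.
func runtimeDisabled(profile, unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeDisabled("functional-414245", unit))
	}
}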

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-414245 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-414245 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-8ztt2" [fbe49626-d177-4820-8a26-10b8f95b548a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-8ztt2" [fbe49626-d177-4820-8a26-10b8f95b548a] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004436223s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.20s)
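
DeployApp is a plain create-then-expose flow followed by the usual label wait. A compact sketch of the same sequence; the kubectl wrapper below is hypothetical, and the final wait would reuse a poller like the one sketched after the PersistentVolumeClaim test above.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the functional-414245 context.
func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-414245"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	if err := kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server"); err != nil {
		fmt.Println(err)
		return
	}
	if err := kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"); err != nil {
		fmt.Println(err)
		return
	}
	// next step: poll pods with label app=hello-node until Running, as in the earlier wait sketch
}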

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (1.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-414245 image ls --format short --alsologtostderr: (1.146505632s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-414245 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-414245
localhost/kicbase/echo-server:functional-414245
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-414245 image ls --format short --alsologtostderr:
I1217 11:26:10.337090 1733142 out.go:360] Setting OutFile to fd 1 ...
I1217 11:26:10.337408 1733142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:10.337420 1733142 out.go:374] Setting ErrFile to fd 2...
I1217 11:26:10.337425 1733142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:10.337746 1733142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:26:10.338569 1733142 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:10.338702 1733142 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:10.339328 1733142 cli_runner.go:164] Run: docker container inspect functional-414245 --format={{.State.Status}}
I1217 11:26:10.364651 1733142 ssh_runner.go:195] Run: systemctl --version
I1217 11:26:10.364718 1733142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-414245
I1217 11:26:10.388639 1733142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34316 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-414245/id_rsa Username:docker}
I1217 11:26:10.494715 1733142 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (1.15s)
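
As the stderr shows, image ls ultimately runs sudo crictl images --output json inside the node and formats the result. A sketch that reproduces the short format by decoding that JSON; the struct assumes crictl's usual output shape (an images array with repoTags), so treat the field names as an assumption rather than a guarantee.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the subset of "crictl images --output json" this sketch needs.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-414245",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode:", err)
		return
	}
	// print one repo:tag per line, which is roughly what the short format shows
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}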

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-414245 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-414245                     │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-414245                     │ 9ff6e4dd4002b │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1                          │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1                          │ 5032a56602e1b │ 76.9MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1                          │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1                          │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-414245 image ls --format table --alsologtostderr:
I1217 11:26:12.759408 1733590 out.go:360] Setting OutFile to fd 1 ...
I1217 11:26:12.759563 1733590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:12.759575 1733590 out.go:374] Setting ErrFile to fd 2...
I1217 11:26:12.759582 1733590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:12.759883 1733590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:26:12.760519 1733590 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:12.760641 1733590 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:12.761110 1733590 cli_runner.go:164] Run: docker container inspect functional-414245 --format={{.State.Status}}
I1217 11:26:12.780789 1733590 ssh_runner.go:195] Run: systemctl --version
I1217 11:26:12.780841 1733590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-414245
I1217 11:26:12.799157 1733590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34316 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-414245/id_rsa Username:docker}
I1217 11:26:12.893621 1733590 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-414245 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d
4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9ff6e4dd4002b80bd4fbfbbd6f20028a36c565abdac452d0c257563880678284","repoDigests":["localhost/minikube-local-cache-test@sha256:f6b61bff2265640ebec27bb313c692dede9591db2dbb9b902cc4156b06543db4"],"repoTags":["localhost/minikube-local-cache-test:functional-414245"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba
217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d1
0a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-414245"],"size":"4945146"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","regist
ry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernete
sui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00
ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-414245 image ls --format json --alsologtostderr:
I1217 11:26:12.524989 1733539 out.go:360] Setting OutFile to fd 1 ...
I1217 11:26:12.525272 1733539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:12.525284 1733539 out.go:374] Setting ErrFile to fd 2...
I1217 11:26:12.525290 1733539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:12.525564 1733539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:26:12.526153 1733539 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:12.526275 1733539 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:12.526830 1733539 cli_runner.go:164] Run: docker container inspect functional-414245 --format={{.State.Status}}
I1217 11:26:12.546705 1733539 ssh_runner.go:195] Run: systemctl --version
I1217 11:26:12.546759 1733539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-414245
I1217 11:26:12.566492 1733539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34316 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-414245/id_rsa Username:docker}
I1217 11:26:12.660903 1733539 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-414245 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-414245
size: "4945146"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 9ff6e4dd4002b80bd4fbfbbd6f20028a36c565abdac452d0c257563880678284
repoDigests:
- localhost/minikube-local-cache-test@sha256:f6b61bff2265640ebec27bb313c692dede9591db2dbb9b902cc4156b06543db4
repoTags:
- localhost/minikube-local-cache-test:functional-414245
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-414245 image ls --format yaml --alsologtostderr:
I1217 11:26:11.486936 1733239 out.go:360] Setting OutFile to fd 1 ...
I1217 11:26:11.487057 1733239 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:11.487068 1733239 out.go:374] Setting ErrFile to fd 2...
I1217 11:26:11.487074 1733239 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:11.487348 1733239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:26:11.488188 1733239 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:11.488332 1733239 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:11.489137 1733239 cli_runner.go:164] Run: docker container inspect functional-414245 --format={{.State.Status}}
I1217 11:26:11.514962 1733239 ssh_runner.go:195] Run: systemctl --version
I1217 11:26:11.515026 1733239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-414245
I1217 11:26:11.538714 1733239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34316 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-414245/id_rsa Username:docker}
I1217 11:26:11.634422 1733239 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (9.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh pgrep buildkitd: exit status 1 (281.960078ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image build -t localhost/my-image:functional-414245 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-414245 image build -t localhost/my-image:functional-414245 testdata/build --alsologtostderr: (8.86996415s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-414245 image build -t localhost/my-image:functional-414245 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e1e29185d1d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-414245
--> ff872edd01c
Successfully tagged localhost/my-image:functional-414245
ff872edd01ca9dfe1dfd01271a9089df30a3a20907adf666ff3c254052b1c8ce
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-414245 image build -t localhost/my-image:functional-414245 testdata/build --alsologtostderr:
I1217 11:26:12.021605 1733453 out.go:360] Setting OutFile to fd 1 ...
I1217 11:26:12.021896 1733453 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:12.021907 1733453 out.go:374] Setting ErrFile to fd 2...
I1217 11:26:12.021913 1733453 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:26:12.022157 1733453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
I1217 11:26:12.022855 1733453 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:12.023730 1733453 config.go:182] Loaded profile config "functional-414245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:26:12.024266 1733453 cli_runner.go:164] Run: docker container inspect functional-414245 --format={{.State.Status}}
I1217 11:26:12.043879 1733453 ssh_runner.go:195] Run: systemctl --version
I1217 11:26:12.043949 1733453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-414245
I1217 11:26:12.063373 1733453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34316 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/functional-414245/id_rsa Username:docker}
I1217 11:26:12.158444 1733453 build_images.go:162] Building image from path: /tmp/build.2412091581.tar
I1217 11:26:12.158514 1733453 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 11:26:12.168024 1733453 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2412091581.tar
I1217 11:26:12.172076 1733453 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2412091581.tar: stat -c "%s %y" /var/lib/minikube/build/build.2412091581.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2412091581.tar': No such file or directory
I1217 11:26:12.172106 1733453 ssh_runner.go:362] scp /tmp/build.2412091581.tar --> /var/lib/minikube/build/build.2412091581.tar (3072 bytes)
I1217 11:26:12.192677 1733453 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2412091581
I1217 11:26:12.201661 1733453 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2412091581 -xf /var/lib/minikube/build/build.2412091581.tar
I1217 11:26:12.210733 1733453 crio.go:315] Building image: /var/lib/minikube/build/build.2412091581
I1217 11:26:12.210805 1733453 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-414245 /var/lib/minikube/build/build.2412091581 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 11:26:20.798943 1733453 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-414245 /var/lib/minikube/build/build.2412091581 --cgroup-manager=cgroupfs: (8.588109163s)
I1217 11:26:20.799011 1733453 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2412091581
I1217 11:26:20.809237 1733453 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2412091581.tar
I1217 11:26:20.817801 1733453 build_images.go:218] Built localhost/my-image:functional-414245 from /tmp/build.2412091581.tar
I1217 11:26:20.817835 1733453 build_images.go:134] succeeded building to: functional-414245
I1217 11:26:20.817840 1733453 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (9.40s)
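
The stderr above spells out the build flow on a crio runtime: the build context is tarred locally, copied to /var/lib/minikube/build, unpacked, and built with sudo podman build --cgroup-manager=cgroupfs. A rough sketch of the in-node command sequence; the paths, the nodeRun wrapper, and the omitted copy step are illustrative, not build_images.go itself.

package main

import (
	"fmt"
	"os/exec"
)

// nodeRun executes a shell command inside the minikube node over ssh.
func nodeRun(cmd string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-414245",
		"ssh", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	// Assumes the tarred build context has already been copied into the node as
	// /var/lib/minikube/build/build.tar; that transfer step is omitted here.
	steps := []string{
		"sudo mkdir -p /var/lib/minikube/build/ctx",
		"sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build.tar",
		"sudo podman build -t localhost/my-image:functional-414245 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs",
		"sudo rm -rf /var/lib/minikube/build/ctx /var/lib/minikube/build/build.tar",
	}
	for _, s := range steps {
		if err := nodeRun(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}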

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.88s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-414245
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image load --daemon kicbase/echo-server:functional-414245 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-414245 image load --daemon kicbase/echo-server:functional-414245 --alsologtostderr: (1.033309812s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 update-context --alsologtostderr -v=2
I1217 11:26:10.111988 1672941 detect.go:223] nested VM detected
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.98s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image load --daemon kicbase/echo-server:functional-414245 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1655633105/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765970751897884970" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1655633105/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765970751897884970" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1655633105/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765970751897884970" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1655633105/001/test-1765970751897884970
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.07804ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:25:52.237313 1672941 retry.go:31] will retry after 434.196903ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 11:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 11:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 11:25 test-1765970751897884970
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh cat /mount-9p/test-1765970751897884970
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-414245 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c3f517ee-854d-4048-844a-53b4c4f43d15] Pending
helpers_test.go:353: "busybox-mount" [c3f517ee-854d-4048-844a-53b4c4f43d15] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c3f517ee-854d-4048-844a-53b4c4f43d15] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c3f517ee-854d-4048-844a-53b4c4f43d15] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003581883s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-414245 logs busybox-mount
I1217 11:25:58.710874 1672941 detect.go:223] nested VM detected
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1655633105/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.02s)
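Note: the any-port flow above is mount daemon -> findmnt (with one retry) -> file checks -> busybox-mount pod -> unmount. A minimal sketch of the same flow by hand, with /tmp/mnt-demo standing in for the per-test temp directory:

# Prepare a host directory and start the 9p mount in the background.
mkdir -p /tmp/mnt-demo && echo "hello from host" > /tmp/mnt-demo/created-by-test
out/minikube-linux-amd64 mount -p functional-414245 /tmp/mnt-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!

# The first findmnt can race the mount coming up, which is why the test retries after exit status 1.
out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-414245 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-414245 ssh cat /mount-9p/created-by-test

# Exercise the mount from a pod, then unmount and stop the daemon.
kubectl --context functional-414245 replace --force -f testdata/busybox-mount-test.yaml
out/minikube-linux-amd64 -p functional-414245 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID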

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "367.229732ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "80.139731ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-414245
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image load --daemon kicbase/echo-server:functional-414245 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "364.914212ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.840658ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.43s)
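Note: both ProfileCmd listing subtests only check that the commands succeed within a time budget; the durations above are informational. A sketch of the same calls; the -l/--light variants are the faster listings seen above (that they skip the slower per-cluster status probing is an assumption, as is using python3 purely to pretty-print the JSON):

out/minikube-linux-amd64 profile list
out/minikube-linux-amd64 profile list -l
out/minikube-linux-amd64 profile list -o json | python3 -m json.tool
out/minikube-linux-amd64 profile list -o json --light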

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image save kicbase/echo-server:functional-414245 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image rm kicbase/echo-server:functional-414245 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-414245
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 image save --daemon kicbase/echo-server:functional-414245 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-414245
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.42s)
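Note: taken together, the ImageCommands subtests above round-trip an image between the host Docker daemon, a tarball, and the cluster's CRI-O image store. A minimal sketch of that round trip (the /tmp tarball path is illustrative; the suite writes into its Jenkins workspace):

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-414245

# Host daemon -> cluster runtime, then list what the runtime sees.
out/minikube-linux-amd64 -p functional-414245 image load --daemon kicbase/echo-server:functional-414245 --alsologtostderr
out/minikube-linux-amd64 -p functional-414245 image ls

# Cluster runtime -> tarball -> back into the runtime.
out/minikube-linux-amd64 -p functional-414245 image save kicbase/echo-server:functional-414245 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-414245 image rm kicbase/echo-server:functional-414245 --alsologtostderr
out/minikube-linux-amd64 -p functional-414245 image load /tmp/echo-server-save.tar --alsologtostderr

# Cluster runtime -> host daemon; the restored image shows up under the localhost/ prefix, as inspected above.
docker rmi kicbase/echo-server:functional-414245
out/minikube-linux-amd64 -p functional-414245 image save --daemon kicbase/echo-server:functional-414245 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-414245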

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1728983: os: process already finished
helpers_test.go:520: unable to terminate pid 1728766: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-414245 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [52d600d8-538d-4e97-acdf-a1e12b8d6047] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [52d600d8-538d-4e97-acdf-a1e12b8d6047] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003875703s
I1217 11:26:06.471607 1672941 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 service list -o json
functional_test.go:1504: Took "376.488636ms" to run "out/minikube-linux-amd64 -p functional-414245 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32036
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32036
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.41s)
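Note: the ServiceCmd subtests are four views of the same hello-node NodePort service (port 32036 in this run). A minimal sketch of the equivalent manual queries, assuming the service already exists in the default namespace:

out/minikube-linux-amd64 -p functional-414245 service list
out/minikube-linux-amd64 -p functional-414245 service list -o json

# URL variants: HTTPS form, node IP only, and the plain HTTP URL.
out/minikube-linux-amd64 -p functional-414245 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-414245 service hello-node --url --format={{.IP}}
out/minikube-linux-amd64 -p functional-414245 service hello-node --url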

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2920811523/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.131105ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:26:00.275709 1672941 retry.go:31] will retry after 263.883493ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2920811523/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh "sudo umount -f /mount-9p": exit status 1 (297.937194ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-414245 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2920811523/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2813922101/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2813922101/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2813922101/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T" /mount1: exit status 1 (369.644381ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:26:02.006126 1672941 retry.go:31] will retry after 732.230152ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-414245 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2813922101/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2813922101/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-414245 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2813922101/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.02s)
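Note: VerifyCleanup depends on a single "mount --kill=true" call tearing down every mount daemon for the profile, which is why the three stop steps afterwards find no surviving parent process. A minimal sketch, assuming /tmp/mnt-demo exists on the host:

# Start several mount daemons against the same profile.
out/minikube-linux-amd64 mount -p functional-414245 /tmp/mnt-demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-414245 /tmp/mnt-demo:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-414245 /tmp/mnt-demo:/mount3 --alsologtostderr -v=1 &

# Verify one of them from inside the guest, then kill them all in one call.
out/minikube-linux-amd64 -p functional-414245 ssh "findmnt -T" /mount1
out/minikube-linux-amd64 mount -p functional-414245 --kill=true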

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-414245 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.134.224 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)
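Note: the tunnel subtests reduce to keeping minikube tunnel running, exposing a LoadBalancer service, and confirming that its ingress IP (10.100.134.224 in this run) answers directly from the host. A minimal sketch of that check; curl is an assumption here, since the test issues the HTTP request from Go:

# Keep the tunnel running in the background for the duration of the check.
out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr &
TUNNEL_PID=$!

kubectl --context functional-414245 apply -f testdata/testsvc.yaml
kubectl --context functional-414245 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Hit the reported ingress IP straight from the host, then stop the tunnel.
curl -s http://10.100.134.224 >/dev/null && echo "tunnel is working"
kill $TUNNEL_PID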

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-414245 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-414245
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-414245
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-414245
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (112.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 11:26:53.252755 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:27:20.964972 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.172096 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.178515 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.189904 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.211301 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.252716 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.334200 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.495815 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:03.817565 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:04.459593 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:05.741045 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:08.303016 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:13.424740 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:28:23.666478 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m51.499226418s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (112.24s)
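Note: the whole multi-control-plane suite is driven by the start flags shown above; --ha provisions a multi-node control plane and --wait true blocks until the cluster components report ready. A minimal sketch of the same bring-up outside the harness, reusing the profile name from the log:

out/minikube-linux-amd64 -p ha-770145 start --ha --memory 3072 --wait true \
  --alsologtostderr -v 5 --driver=docker --container-runtime=crio

# status prints one block per node (control planes and the worker) and is what the test asserts on.
out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5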

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 kubectl -- rollout status deployment/busybox: (4.46618797s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-9qbqd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-grvb8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-t8czn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-9qbqd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-grvb8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-t8czn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-9qbqd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-grvb8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-t8czn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.48s)
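Note: DeployApp applies a small busybox deployment and checks in-cluster DNS from every replica. A minimal sketch of the same checks; POD below is a placeholder for whichever replica name get pods returns:

out/minikube-linux-amd64 -p ha-770145 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 -p ha-770145 kubectl -- rollout status deployment/busybox
out/minikube-linux-amd64 -p ha-770145 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'

# Resolve an external name and the in-cluster service names from one replica.
POD=busybox-7b57f96db7-9qbqd
out/minikube-linux-amd64 -p ha-770145 kubectl -- exec "$POD" -- nslookup kubernetes.io
out/minikube-linux-amd64 -p ha-770145 kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local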

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-9qbqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-9qbqd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-grvb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-grvb8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-t8czn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 kubectl -- exec busybox-7b57f96db7-t8czn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
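Note: PingHostFromPods derives the host address by resolving host.minikube.internal inside each pod: awk 'NR==5' keeps the fifth line of the nslookup output and cut -d' ' -f3 keeps its third space-separated field, which in this run is the host gateway 192.168.49.1; that address is then pinged. A minimal sketch for a single pod (placeholder pod name as above; the command substitution is illustrative, the test runs the two exec calls separately):

POD=busybox-7b57f96db7-9qbqd
HOST_IP=$(out/minikube-linux-amd64 -p ha-770145 kubectl -- exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")

# The extracted address is the Docker network gateway (192.168.49.1 in this run).
out/minikube-linux-amd64 -p ha-770145 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"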

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node add --alsologtostderr -v 5
E1217 11:28:44.148484 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 node add --alsologtostderr -v 5: (28.260969962s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.19s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-770145 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp testdata/cp-test.txt ha-770145:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1661493676/001/cp-test_ha-770145.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145:/home/docker/cp-test.txt ha-770145-m02:/home/docker/cp-test_ha-770145_ha-770145-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test_ha-770145_ha-770145-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145:/home/docker/cp-test.txt ha-770145-m03:/home/docker/cp-test_ha-770145_ha-770145-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test_ha-770145_ha-770145-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145:/home/docker/cp-test.txt ha-770145-m04:/home/docker/cp-test_ha-770145_ha-770145-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test_ha-770145_ha-770145-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp testdata/cp-test.txt ha-770145-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1661493676/001/cp-test_ha-770145-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m02:/home/docker/cp-test.txt ha-770145:/home/docker/cp-test_ha-770145-m02_ha-770145.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test_ha-770145-m02_ha-770145.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m02:/home/docker/cp-test.txt ha-770145-m03:/home/docker/cp-test_ha-770145-m02_ha-770145-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test_ha-770145-m02_ha-770145-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m02:/home/docker/cp-test.txt ha-770145-m04:/home/docker/cp-test_ha-770145-m02_ha-770145-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test_ha-770145-m02_ha-770145-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp testdata/cp-test.txt ha-770145-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1661493676/001/cp-test_ha-770145-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m03:/home/docker/cp-test.txt ha-770145:/home/docker/cp-test_ha-770145-m03_ha-770145.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test_ha-770145-m03_ha-770145.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m03:/home/docker/cp-test.txt ha-770145-m02:/home/docker/cp-test_ha-770145-m03_ha-770145-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test_ha-770145-m03_ha-770145-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m03:/home/docker/cp-test.txt ha-770145-m04:/home/docker/cp-test_ha-770145-m03_ha-770145-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test_ha-770145-m03_ha-770145-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp testdata/cp-test.txt ha-770145-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1661493676/001/cp-test_ha-770145-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m04:/home/docker/cp-test.txt ha-770145:/home/docker/cp-test_ha-770145-m04_ha-770145.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test_ha-770145-m04_ha-770145.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m04:/home/docker/cp-test.txt ha-770145-m02:/home/docker/cp-test_ha-770145-m04_ha-770145-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test_ha-770145-m04_ha-770145-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 cp ha-770145-m04:/home/docker/cp-test.txt ha-770145-m03:/home/docker/cp-test_ha-770145-m04_ha-770145-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m03 "sudo cat /home/docker/cp-test_ha-770145-m04_ha-770145-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.35s)
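Note: CopyFile is a full matrix: cp-test.txt is copied host -> node, node -> host, and node -> node for every pair, with ssh -n ... "sudo cat" verifying each hop. A minimal sketch of one hop in each direction (the /tmp destination path is illustrative; the suite uses its per-test temp directory):

# Host -> node, then verify on the node.
out/minikube-linux-amd64 -p ha-770145 cp testdata/cp-test.txt ha-770145:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145 "sudo cat /home/docker/cp-test.txt"

# Node -> host, and node -> node.
out/minikube-linux-amd64 -p ha-770145 cp ha-770145:/home/docker/cp-test.txt /tmp/cp-test_ha-770145.txt
out/minikube-linux-amd64 -p ha-770145 cp ha-770145:/home/docker/cp-test.txt ha-770145-m02:/home/docker/cp-test_ha-770145_ha-770145-m02.txt
out/minikube-linux-amd64 -p ha-770145 ssh -n ha-770145-m02 "sudo cat /home/docker/cp-test_ha-770145_ha-770145-m02.txt"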

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (18.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node stop m02 --alsologtostderr -v 5
E1217 11:29:25.111715 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 node stop m02 --alsologtostderr -v 5: (18.061445098s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5: exit status 7 (735.00979ms)

                                                
                                                
-- stdout --
	ha-770145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770145-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770145-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770145-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:29:42.227951 1754325 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:29:42.228055 1754325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:42.228059 1754325 out.go:374] Setting ErrFile to fd 2...
	I1217 11:29:42.228063 1754325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:42.228271 1754325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:29:42.228458 1754325 out.go:368] Setting JSON to false
	I1217 11:29:42.228492 1754325 mustload.go:66] Loading cluster: ha-770145
	I1217 11:29:42.228614 1754325 notify.go:221] Checking for updates...
	I1217 11:29:42.228860 1754325 config.go:182] Loaded profile config "ha-770145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:29:42.228875 1754325 status.go:174] checking status of ha-770145 ...
	I1217 11:29:42.229281 1754325 cli_runner.go:164] Run: docker container inspect ha-770145 --format={{.State.Status}}
	I1217 11:29:42.248841 1754325 status.go:371] ha-770145 host status = "Running" (err=<nil>)
	I1217 11:29:42.248877 1754325 host.go:66] Checking if "ha-770145" exists ...
	I1217 11:29:42.249202 1754325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770145
	I1217 11:29:42.270800 1754325 host.go:66] Checking if "ha-770145" exists ...
	I1217 11:29:42.271179 1754325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:29:42.271226 1754325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770145
	I1217 11:29:42.290264 1754325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/ha-770145/id_rsa Username:docker}
	I1217 11:29:42.384664 1754325 ssh_runner.go:195] Run: systemctl --version
	I1217 11:29:42.391726 1754325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:29:42.406021 1754325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:29:42.464393 1754325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 11:29:42.453730045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:29:42.465172 1754325 kubeconfig.go:125] found "ha-770145" server: "https://192.168.49.254:8443"
	I1217 11:29:42.465215 1754325 api_server.go:166] Checking apiserver status ...
	I1217 11:29:42.465261 1754325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:29:42.478497 1754325 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1298/cgroup
	W1217 11:29:42.488112 1754325 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1298/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:29:42.488165 1754325 ssh_runner.go:195] Run: ls
	I1217 11:29:42.492507 1754325 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 11:29:42.497023 1754325 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 11:29:42.497057 1754325 status.go:463] ha-770145 apiserver status = Running (err=<nil>)
	I1217 11:29:42.497070 1754325 status.go:176] ha-770145 status: &{Name:ha-770145 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:29:42.497094 1754325 status.go:174] checking status of ha-770145-m02 ...
	I1217 11:29:42.497430 1754325 cli_runner.go:164] Run: docker container inspect ha-770145-m02 --format={{.State.Status}}
	I1217 11:29:42.517723 1754325 status.go:371] ha-770145-m02 host status = "Stopped" (err=<nil>)
	I1217 11:29:42.517749 1754325 status.go:384] host is not running, skipping remaining checks
	I1217 11:29:42.517757 1754325 status.go:176] ha-770145-m02 status: &{Name:ha-770145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:29:42.517781 1754325 status.go:174] checking status of ha-770145-m03 ...
	I1217 11:29:42.518115 1754325 cli_runner.go:164] Run: docker container inspect ha-770145-m03 --format={{.State.Status}}
	I1217 11:29:42.538021 1754325 status.go:371] ha-770145-m03 host status = "Running" (err=<nil>)
	I1217 11:29:42.538055 1754325 host.go:66] Checking if "ha-770145-m03" exists ...
	I1217 11:29:42.538428 1754325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770145-m03
	I1217 11:29:42.557922 1754325 host.go:66] Checking if "ha-770145-m03" exists ...
	I1217 11:29:42.558256 1754325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:29:42.558299 1754325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770145-m03
	I1217 11:29:42.578129 1754325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34331 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/ha-770145-m03/id_rsa Username:docker}
	I1217 11:29:42.671446 1754325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:29:42.685257 1754325 kubeconfig.go:125] found "ha-770145" server: "https://192.168.49.254:8443"
	I1217 11:29:42.685290 1754325 api_server.go:166] Checking apiserver status ...
	I1217 11:29:42.685331 1754325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:29:42.696832 1754325 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	W1217 11:29:42.706283 1754325 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:29:42.706376 1754325 ssh_runner.go:195] Run: ls
	I1217 11:29:42.710955 1754325 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 11:29:42.715401 1754325 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 11:29:42.715430 1754325 status.go:463] ha-770145-m03 apiserver status = Running (err=<nil>)
	I1217 11:29:42.715439 1754325 status.go:176] ha-770145-m03 status: &{Name:ha-770145-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:29:42.715456 1754325 status.go:174] checking status of ha-770145-m04 ...
	I1217 11:29:42.715736 1754325 cli_runner.go:164] Run: docker container inspect ha-770145-m04 --format={{.State.Status}}
	I1217 11:29:42.735301 1754325 status.go:371] ha-770145-m04 host status = "Running" (err=<nil>)
	I1217 11:29:42.735333 1754325 host.go:66] Checking if "ha-770145-m04" exists ...
	I1217 11:29:42.735715 1754325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770145-m04
	I1217 11:29:42.755058 1754325 host.go:66] Checking if "ha-770145-m04" exists ...
	I1217 11:29:42.755376 1754325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:29:42.755457 1754325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770145-m04
	I1217 11:29:42.774223 1754325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34336 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/ha-770145-m04/id_rsa Username:docker}
	I1217 11:29:42.867095 1754325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:29:42.893400 1754325 status.go:176] ha-770145-m04 status: &{Name:ha-770145-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.80s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.17s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 node start m02 --alsologtostderr -v 5: (8.196800258s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.17s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (109.95s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 stop --alsologtostderr -v 5: (40.326672985s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 start --wait true --alsologtostderr -v 5
E1217 11:30:47.033357 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.032749 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.039208 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.050673 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.072168 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.113448 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.194984 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.356600 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:49.678412 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:50.320598 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:51.602251 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:54.164209 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:59.286173 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:31:09.527503 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:31:30.009213 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 start --wait true --alsologtostderr -v 5: (1m9.479843453s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (109.95s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.69s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node delete m03 --alsologtostderr -v 5
E1217 11:31:53.252553 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 node delete m03 --alsologtostderr -v 5: (9.81443425s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.69s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (31.57s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 stop --alsologtostderr -v 5
E1217 11:32:10.971767 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 stop --alsologtostderr -v 5: (31.448176514s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5: exit status 7 (123.503043ms)
-- stdout --
	ha-770145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770145-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770145-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 11:32:26.605329 1768833 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:32:26.605476 1768833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:32:26.605486 1768833 out.go:374] Setting ErrFile to fd 2...
	I1217 11:32:26.605490 1768833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:32:26.605741 1768833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:32:26.605912 1768833 out.go:368] Setting JSON to false
	I1217 11:32:26.605947 1768833 mustload.go:66] Loading cluster: ha-770145
	I1217 11:32:26.606078 1768833 notify.go:221] Checking for updates...
	I1217 11:32:26.606306 1768833 config.go:182] Loaded profile config "ha-770145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:32:26.606322 1768833 status.go:174] checking status of ha-770145 ...
	I1217 11:32:26.606781 1768833 cli_runner.go:164] Run: docker container inspect ha-770145 --format={{.State.Status}}
	I1217 11:32:26.626179 1768833 status.go:371] ha-770145 host status = "Stopped" (err=<nil>)
	I1217 11:32:26.626223 1768833 status.go:384] host is not running, skipping remaining checks
	I1217 11:32:26.626239 1768833 status.go:176] ha-770145 status: &{Name:ha-770145 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:32:26.626270 1768833 status.go:174] checking status of ha-770145-m02 ...
	I1217 11:32:26.626592 1768833 cli_runner.go:164] Run: docker container inspect ha-770145-m02 --format={{.State.Status}}
	I1217 11:32:26.647024 1768833 status.go:371] ha-770145-m02 host status = "Stopped" (err=<nil>)
	I1217 11:32:26.647050 1768833 status.go:384] host is not running, skipping remaining checks
	I1217 11:32:26.647059 1768833 status.go:176] ha-770145-m02 status: &{Name:ha-770145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:32:26.647085 1768833 status.go:174] checking status of ha-770145-m04 ...
	I1217 11:32:26.647355 1768833 cli_runner.go:164] Run: docker container inspect ha-770145-m04 --format={{.State.Status}}
	I1217 11:32:26.664845 1768833 status.go:371] ha-770145-m04 host status = "Stopped" (err=<nil>)
	I1217 11:32:26.664873 1768833 status.go:384] host is not running, skipping remaining checks
	I1217 11:32:26.664881 1768833 status.go:176] ha-770145-m04 status: &{Name:ha-770145-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (31.57s)

TestMultiControlPlane/serial/RestartCluster (54.84s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 11:33:03.172482 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.985497089s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.84s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (46.59s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 node add --control-plane --alsologtostderr -v 5
E1217 11:33:30.875932 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:33:32.893907 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-770145 node add --control-plane --alsologtostderr -v 5: (45.684163306s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-770145 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

TestJSONOutput/start/Command (40.32s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-843742 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-843742 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.3147105s)
--- PASS: TestJSONOutput/start/Command (40.32s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-843742 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-843742 --output=json --user=testUser: (6.082702127s)
--- PASS: TestJSONOutput/stop/Command (6.08s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-639271 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-639271 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.251986ms)
-- stdout --
	{"specversion":"1.0","id":"fcdfe1e5-8825-432f-8cd2-3b1790e748c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-639271] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eff6471f-03b7-4297-a88a-194aa8dae190","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"d0a61c6f-dac2-42b0-a119-161de48f03b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7b7da5bd-9ac9-4a43-b35e-b71323fc583d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig"}}
	{"specversion":"1.0","id":"5969bee8-d7a6-4837-82ca-947e57052a43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube"}}
	{"specversion":"1.0","id":"e76bbced-b012-452f-a9ed-fd90e302be76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5e5f9c73-53d5-400d-b74a-21ad5b8de898","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"926217c2-132c-42fe-a8e6-d1c34e3bad5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-639271" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-639271
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (33.1s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-897131 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-897131 --network=: (30.918071509s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-897131" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-897131
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-897131: (2.162342771s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.10s)

TestKicCustomNetwork/use_default_bridge_network (24.08s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-611990 --network=bridge
E1217 11:35:49.032999 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-611990 --network=bridge: (22.006812747s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-611990" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-611990
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-611990: (2.054908434s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.08s)

TestKicExistingNetwork (24.43s)
=== RUN   TestKicExistingNetwork
I1217 11:36:11.208501 1672941 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 11:36:11.225307 1672941 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 11:36:11.225373 1672941 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 11:36:11.225392 1672941 cli_runner.go:164] Run: docker network inspect existing-network
W1217 11:36:11.242792 1672941 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 11:36:11.242824 1672941 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1217 11:36:11.242844 1672941 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1217 11:36:11.242985 1672941 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 11:36:11.260774 1672941 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3d92c06bf7e1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:dc:f5:1a:95:c6} reservation:<nil>}
I1217 11:36:11.261194 1672941 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc3250}
I1217 11:36:11.261228 1672941 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 11:36:11.261285 1672941 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 11:36:11.310548 1672941 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-574553 --network=existing-network
E1217 11:36:16.737914 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-574553 --network=existing-network: (22.2808438s)
helpers_test.go:176: Cleaning up "existing-network-574553" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-574553
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-574553: (2.008029107s)
I1217 11:36:35.617124 1672941 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.43s)
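Note: for readers who want to reproduce this scenario outside the test harness, the two commands below are a minimal sketch assembled from the log lines above; the network name existing-network, the profile name existing-network-574553, and the 192.168.58.0/24 subnet are simply the values this particular run happened to pick, not fixed requirements.
	# Pre-create the bridge network with the same flags network_create.go logged above
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	# Then start a profile against the pre-existing network, as the test does
	out/minikube-linux-amd64 start -p existing-network-574553 --network=existing-network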

TestKicCustomSubnet (23.34s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-971203 --subnet=192.168.60.0/24
E1217 11:36:53.252205 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-971203 --subnet=192.168.60.0/24: (21.146213083s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-971203 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-971203" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-971203
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-971203: (2.172059355s)
--- PASS: TestKicCustomSubnet (23.34s)

TestKicStaticIP (23.99s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-491090 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-491090 --static-ip=192.168.200.200: (21.675932732s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-491090 ip
helpers_test.go:176: Cleaning up "static-ip-491090" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-491090
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-491090: (2.164871354s)
--- PASS: TestKicStaticIP (23.99s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.71s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-418404 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-418404 --driver=docker  --container-runtime=crio: (22.690342792s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-420872 --driver=docker  --container-runtime=crio
E1217 11:38:03.171972 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-420872 --driver=docker  --container-runtime=crio: (23.976267069s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-418404
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-420872
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-420872" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-420872
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-420872: (2.396808199s)
helpers_test.go:176: Cleaning up "first-418404" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-418404
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-418404: (2.388550604s)
--- PASS: TestMinikubeProfile (52.71s)

TestMountStart/serial/StartWithMountFirst (8.15s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-966598 --memory=3072 --mount-string /tmp/TestMountStartserial3253241100/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1217 11:38:16.326775 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-966598 --memory=3072 --mount-string /tmp/TestMountStartserial3253241100/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.145937891s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.15s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-966598 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (5.05s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-986652 --memory=3072 --mount-string /tmp/TestMountStartserial3253241100/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-986652 --memory=3072 --mount-string /tmp/TestMountStartserial3253241100/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.04802934s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.05s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986652 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-966598 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-966598 --alsologtostderr -v=5: (1.697301047s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986652 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-986652
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-986652: (1.262422569s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (8.31s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-986652
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-986652: (7.314048724s)
--- PASS: TestMountStart/serial/RestartStopped (8.31s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986652 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (71.43s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498322 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498322 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m10.9210836s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.43s)

TestMultiNode/serial/DeployApp2Nodes (4.45s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-498322 -- rollout status deployment/busybox: (2.955419296s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-grflb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-k4tzx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-grflb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-k4tzx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-grflb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-k4tzx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.45s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-grflb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-grflb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-k4tzx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498322 -- exec busybox-7b57f96db7-k4tzx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

TestMultiNode/serial/AddNode (25.95s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-498322 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-498322 -v=5 --alsologtostderr: (25.306961328s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.95s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-498322 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.95s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp testdata/cp-test.txt multinode-498322:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4199961942/001/cp-test_multinode-498322.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322:/home/docker/cp-test.txt multinode-498322-m02:/home/docker/cp-test_multinode-498322_multinode-498322-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m02 "sudo cat /home/docker/cp-test_multinode-498322_multinode-498322-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322:/home/docker/cp-test.txt multinode-498322-m03:/home/docker/cp-test_multinode-498322_multinode-498322-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m03 "sudo cat /home/docker/cp-test_multinode-498322_multinode-498322-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp testdata/cp-test.txt multinode-498322-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4199961942/001/cp-test_multinode-498322-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322-m02:/home/docker/cp-test.txt multinode-498322:/home/docker/cp-test_multinode-498322-m02_multinode-498322.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322 "sudo cat /home/docker/cp-test_multinode-498322-m02_multinode-498322.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322-m02:/home/docker/cp-test.txt multinode-498322-m03:/home/docker/cp-test_multinode-498322-m02_multinode-498322-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m03 "sudo cat /home/docker/cp-test_multinode-498322-m02_multinode-498322-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp testdata/cp-test.txt multinode-498322-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4199961942/001/cp-test_multinode-498322-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322-m03:/home/docker/cp-test.txt multinode-498322:/home/docker/cp-test_multinode-498322-m03_multinode-498322.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322 "sudo cat /home/docker/cp-test_multinode-498322-m03_multinode-498322.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 cp multinode-498322-m03:/home/docker/cp-test.txt multinode-498322-m02:/home/docker/cp-test_multinode-498322-m03_multinode-498322-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 ssh -n multinode-498322-m02 "sudo cat /home/docker/cp-test_multinode-498322-m03_multinode-498322-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.95s)

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-498322 node stop m03: (1.277785925s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498322 status: exit status 7 (490.791883ms)

                                                
                                                
-- stdout --
	multinode-498322
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498322-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498322-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr: exit status 7 (497.709538ms)

                                                
                                                
-- stdout --
	multinode-498322
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498322-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498322-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:40:38.296233 1829447 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:40:38.296489 1829447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:40:38.296498 1829447 out.go:374] Setting ErrFile to fd 2...
	I1217 11:40:38.296501 1829447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:40:38.296722 1829447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:40:38.296906 1829447 out.go:368] Setting JSON to false
	I1217 11:40:38.296939 1829447 mustload.go:66] Loading cluster: multinode-498322
	I1217 11:40:38.297048 1829447 notify.go:221] Checking for updates...
	I1217 11:40:38.297273 1829447 config.go:182] Loaded profile config "multinode-498322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:40:38.297288 1829447 status.go:174] checking status of multinode-498322 ...
	I1217 11:40:38.297766 1829447 cli_runner.go:164] Run: docker container inspect multinode-498322 --format={{.State.Status}}
	I1217 11:40:38.318179 1829447 status.go:371] multinode-498322 host status = "Running" (err=<nil>)
	I1217 11:40:38.318204 1829447 host.go:66] Checking if "multinode-498322" exists ...
	I1217 11:40:38.318475 1829447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-498322
	I1217 11:40:38.336289 1829447 host.go:66] Checking if "multinode-498322" exists ...
	I1217 11:40:38.336651 1829447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:40:38.336706 1829447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-498322
	I1217 11:40:38.355174 1829447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34441 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/multinode-498322/id_rsa Username:docker}
	I1217 11:40:38.446260 1829447 ssh_runner.go:195] Run: systemctl --version
	I1217 11:40:38.453018 1829447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:40:38.466395 1829447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:40:38.524272 1829447 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 11:40:38.513643174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:40:38.524834 1829447 kubeconfig.go:125] found "multinode-498322" server: "https://192.168.67.2:8443"
	I1217 11:40:38.524868 1829447 api_server.go:166] Checking apiserver status ...
	I1217 11:40:38.524909 1829447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:40:38.536968 1829447 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1291/cgroup
	W1217 11:40:38.545292 1829447 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1291/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:40:38.545338 1829447 ssh_runner.go:195] Run: ls
	I1217 11:40:38.549166 1829447 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 11:40:38.553490 1829447 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 11:40:38.553516 1829447 status.go:463] multinode-498322 apiserver status = Running (err=<nil>)
	I1217 11:40:38.553527 1829447 status.go:176] multinode-498322 status: &{Name:multinode-498322 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:40:38.553561 1829447 status.go:174] checking status of multinode-498322-m02 ...
	I1217 11:40:38.553904 1829447 cli_runner.go:164] Run: docker container inspect multinode-498322-m02 --format={{.State.Status}}
	I1217 11:40:38.572238 1829447 status.go:371] multinode-498322-m02 host status = "Running" (err=<nil>)
	I1217 11:40:38.572260 1829447 host.go:66] Checking if "multinode-498322-m02" exists ...
	I1217 11:40:38.572513 1829447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-498322-m02
	I1217 11:40:38.590525 1829447 host.go:66] Checking if "multinode-498322-m02" exists ...
	I1217 11:40:38.590860 1829447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:40:38.590936 1829447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-498322-m02
	I1217 11:40:38.609894 1829447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34446 SSHKeyPath:/home/jenkins/minikube-integration/21808-1669348/.minikube/machines/multinode-498322-m02/id_rsa Username:docker}
	I1217 11:40:38.699941 1829447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:40:38.712787 1829447 status.go:176] multinode-498322-m02 status: &{Name:multinode-498322-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:40:38.712824 1829447 status.go:174] checking status of multinode-498322-m03 ...
	I1217 11:40:38.713077 1829447 cli_runner.go:164] Run: docker container inspect multinode-498322-m03 --format={{.State.Status}}
	I1217 11:40:38.731443 1829447 status.go:371] multinode-498322-m03 host status = "Stopped" (err=<nil>)
	I1217 11:40:38.731467 1829447 status.go:384] host is not running, skipping remaining checks
	I1217 11:40:38.731496 1829447 status.go:176] multinode-498322-m03 status: &{Name:multinode-498322-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
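
The single-node stop checked above can be reproduced directly; note that `minikube status` returns exit status 7 while any node is down, which is what the test expects here (a sketch using this run's names):

  minikube -p multinode-498322 node stop m03
  minikube -p multinode-498322 status                     # exit 7: m03 reports host/kubelet Stopped
  minikube -p multinode-498322 status --alsologtostderr   # same result, with the checks logged to stderr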

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-498322 node start m03 -v=5 --alsologtostderr: (6.685783463s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (84.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-498322
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-498322
E1217 11:40:49.032808 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-498322: (31.434667937s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498322 --wait=true -v=5 --alsologtostderr
E1217 11:41:53.252219 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498322 --wait=true -v=5 --alsologtostderr: (52.636136296s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-498322
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.20s)
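
What this test asserts is that a full stop/start cycle keeps the multi-node topology. The same check by hand (a sketch, names and flags taken from the run above):

  minikube node list -p multinode-498322      # record the node list
  minikube stop -p multinode-498322           # stops every node in the profile
  minikube start -p multinode-498322 --wait=true -v=5 --alsologtostderr
  minikube node list -p multinode-498322      # should match the list recorded before the stop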

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-498322 node delete m03: (4.696313433s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-498322 stop: (28.445447509s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498322 status: exit status 7 (109.179759ms)

                                                
                                                
-- stdout --
	multinode-498322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-498322-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr: exit status 7 (107.99291ms)

                                                
                                                
-- stdout --
	multinode-498322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-498322-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:42:44.235870 1839450 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:42:44.236129 1839450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:42:44.236137 1839450 out.go:374] Setting ErrFile to fd 2...
	I1217 11:42:44.236142 1839450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:42:44.236327 1839450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:42:44.236543 1839450 out.go:368] Setting JSON to false
	I1217 11:42:44.236579 1839450 mustload.go:66] Loading cluster: multinode-498322
	I1217 11:42:44.236640 1839450 notify.go:221] Checking for updates...
	I1217 11:42:44.236997 1839450 config.go:182] Loaded profile config "multinode-498322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:42:44.237019 1839450 status.go:174] checking status of multinode-498322 ...
	I1217 11:42:44.237649 1839450 cli_runner.go:164] Run: docker container inspect multinode-498322 --format={{.State.Status}}
	I1217 11:42:44.257882 1839450 status.go:371] multinode-498322 host status = "Stopped" (err=<nil>)
	I1217 11:42:44.257936 1839450 status.go:384] host is not running, skipping remaining checks
	I1217 11:42:44.257951 1839450 status.go:176] multinode-498322 status: &{Name:multinode-498322 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:42:44.258020 1839450 status.go:174] checking status of multinode-498322-m02 ...
	I1217 11:42:44.258343 1839450 cli_runner.go:164] Run: docker container inspect multinode-498322-m02 --format={{.State.Status}}
	I1217 11:42:44.280153 1839450 status.go:371] multinode-498322-m02 host status = "Stopped" (err=<nil>)
	I1217 11:42:44.280182 1839450 status.go:384] host is not running, skipping remaining checks
	I1217 11:42:44.280192 1839450 status.go:176] multinode-498322-m02 status: &{Name:multinode-498322-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.66s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498322 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 11:43:03.171880 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498322 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.03379606s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498322 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.64s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-498322
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498322-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-498322-m02 --driver=docker  --container-runtime=crio: exit status 14 (81.748912ms)

                                                
                                                
-- stdout --
	* [multinode-498322-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-498322-m02' is duplicated with machine name 'multinode-498322-m02' in profile 'multinode-498322'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498322-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498322-m03 --driver=docker  --container-runtime=crio: (19.879732061s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-498322
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-498322: exit status 80 (299.622524ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-498322 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-498322-m03 already exists in multinode-498322-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-498322-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-498322-m03: (2.404533263s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.73s)

                                                
                                    
TestPreload (111.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-431403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1217 11:44:26.237877 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-431403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (47.579898861s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-431403 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-431403 image pull gcr.io/k8s-minikube/busybox: (2.201807451s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-431403
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-431403: (8.084978395s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-431403 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1217 11:45:49.032996 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-431403 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.581814112s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-431403 image list
helpers_test.go:176: Cleaning up "test-preload-431403" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-431403
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-431403: (2.442182699s)
--- PASS: TestPreload (111.13s)
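
The preload flow above boils down to: create a cluster without the preloaded image tarball, add an image, stop, restart with preload enabled, and confirm the image survived. A hand-run sketch with the same flags:

  minikube start -p test-preload-431403 --memory=3072 --wait=true --preload=false \
      --driver=docker --container-runtime=crio
  minikube -p test-preload-431403 image pull gcr.io/k8s-minikube/busybox
  minikube stop -p test-preload-431403
  minikube start -p test-preload-431403 --preload=true --wait=true \
      --driver=docker --container-runtime=crio
  minikube -p test-preload-431403 image list    # busybox should still be listed after the restart
  minikube delete -p test-preload-431403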

                                                
                                    
TestScheduledStopUnix (98.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-816702 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-816702 --memory=3072 --driver=docker  --container-runtime=crio: (23.063363356s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816702 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 11:46:18.230994 1856760 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:46:18.231284 1856760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:46:18.231295 1856760 out.go:374] Setting ErrFile to fd 2...
	I1217 11:46:18.231299 1856760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:46:18.231496 1856760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:46:18.231787 1856760 out.go:368] Setting JSON to false
	I1217 11:46:18.231885 1856760 mustload.go:66] Loading cluster: scheduled-stop-816702
	I1217 11:46:18.232200 1856760 config.go:182] Loaded profile config "scheduled-stop-816702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:46:18.232271 1856760 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/config.json ...
	I1217 11:46:18.232456 1856760 mustload.go:66] Loading cluster: scheduled-stop-816702
	I1217 11:46:18.232575 1856760 config.go:182] Loaded profile config "scheduled-stop-816702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-816702 -n scheduled-stop-816702
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 11:46:18.643509 1856909 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:46:18.643814 1856909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:46:18.643825 1856909 out.go:374] Setting ErrFile to fd 2...
	I1217 11:46:18.643830 1856909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:46:18.644063 1856909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:46:18.644330 1856909 out.go:368] Setting JSON to false
	I1217 11:46:18.644523 1856909 daemonize_unix.go:73] killing process 1856794 as it is an old scheduled stop
	I1217 11:46:18.644652 1856909 mustload.go:66] Loading cluster: scheduled-stop-816702
	I1217 11:46:18.644996 1856909 config.go:182] Loaded profile config "scheduled-stop-816702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:46:18.645075 1856909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/config.json ...
	I1217 11:46:18.645247 1856909 mustload.go:66] Loading cluster: scheduled-stop-816702
	I1217 11:46:18.645341 1856909 config.go:182] Loaded profile config "scheduled-stop-816702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 11:46:18.650930 1672941 retry.go:31] will retry after 127.384µs: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.652089 1672941 retry.go:31] will retry after 127.396µs: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.653241 1672941 retry.go:31] will retry after 271.174µs: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.654387 1672941 retry.go:31] will retry after 462.347µs: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.655562 1672941 retry.go:31] will retry after 460.872µs: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.656712 1672941 retry.go:31] will retry after 1.103935ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.657883 1672941 retry.go:31] will retry after 836.51µs: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.659029 1672941 retry.go:31] will retry after 1.261506ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.661247 1672941 retry.go:31] will retry after 1.535781ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.663475 1672941 retry.go:31] will retry after 3.42837ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.667677 1672941 retry.go:31] will retry after 6.325948ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.674908 1672941 retry.go:31] will retry after 5.45631ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.681195 1672941 retry.go:31] will retry after 17.017274ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.698463 1672941 retry.go:31] will retry after 10.353889ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.709789 1672941 retry.go:31] will retry after 16.069827ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
I1217 11:46:18.726010 1672941 retry.go:31] will retry after 65.078148ms: open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816702 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816702 -n scheduled-stop-816702
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-816702
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816702 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 11:46:44.557853 1857604 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:46:44.558132 1857604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:46:44.558143 1857604 out.go:374] Setting ErrFile to fd 2...
	I1217 11:46:44.558149 1857604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:46:44.558391 1857604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:46:44.558701 1857604 out.go:368] Setting JSON to false
	I1217 11:46:44.558804 1857604 mustload.go:66] Loading cluster: scheduled-stop-816702
	I1217 11:46:44.559169 1857604 config.go:182] Loaded profile config "scheduled-stop-816702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:46:44.559259 1857604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/scheduled-stop-816702/config.json ...
	I1217 11:46:44.559467 1857604 mustload.go:66] Loading cluster: scheduled-stop-816702
	I1217 11:46:44.559608 1857604 config.go:182] Loaded profile config "scheduled-stop-816702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
E1217 11:46:53.252365 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1217 11:47:12.101411 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-414245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-816702
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-816702: exit status 7 (86.467024ms)

                                                
                                                
-- stdout --
	scheduled-stop-816702
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816702 -n scheduled-stop-816702
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816702 -n scheduled-stop-816702: exit status 7 (81.63393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-816702" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-816702
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-816702: (4.201903431s)
--- PASS: TestScheduledStopUnix (98.83s)
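
Scheduled stop, as exercised above: each `--schedule` call replaces any pending schedule (the log shows the old scheduled-stop process being killed), `--cancel-scheduled` clears it, and once a schedule fires `status` exits 7. A sketch with this run's profile name:

  minikube stop -p scheduled-stop-816702 --schedule 5m    # schedule a stop 5 minutes out
  minikube stop -p scheduled-stop-816702 --schedule 15s   # re-scheduling replaces the 5m schedule
  minikube stop -p scheduled-stop-816702 --cancel-scheduled
  minikube status --format={{.TimeToStop}} -p scheduled-stop-816702
  # once a scheduled stop has fired:
  minikube status --format={{.Host}} -p scheduled-stop-816702   # prints "Stopped", exit status 7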

                                                
                                    
TestInsufficientStorage (8.93s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-006783 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-006783 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.430373287s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"05dbe668-1f8f-493a-95d6-6220af153fa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-006783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c571a5d5-4766-4cfe-978d-2d00adee633e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"39ff012f-92fd-451a-9b67-08c9adee790b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"16bac3ca-cd22-443e-a297-8acc887f2be1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig"}}
	{"specversion":"1.0","id":"71f16a3b-c966-47fd-9a28-63aebeed9df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube"}}
	{"specversion":"1.0","id":"5f17ab5d-56f2-4b41-9c36-a28f0281bdf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3fa2cc71-b864-4572-b163-8c5e1b42bcd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9cbebb82-ffbb-4a5b-b377-17ca91700885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"00f756ad-5640-4384-8d39-141f9cb7ca30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"81611e9a-f0c0-4995-9415-55b798ca1fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b8ccab9-c75f-4f6f-b6f2-1b0534cbf513","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"298a6790-485e-446c-a459-accd18ab698d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-006783\" primary control-plane node in \"insufficient-storage-006783\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"beb6bc74-91ae-4b5d-a0cd-112d90bf17f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"54009dcd-b636-4fe3-833e-9a365398ac10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c467936-c27c-4889-8e47-1e3927636b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-006783 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-006783 --output=json --layout=cluster: exit status 7 (291.043415ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-006783","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-006783","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:47:40.642051 1860148 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-006783" does not appear in /home/jenkins/minikube-integration/21808-1669348/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-006783 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-006783 --output=json --layout=cluster: exit status 7 (288.216532ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-006783","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-006783","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:47:40.931185 1860263 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-006783" does not appear in /home/jenkins/minikube-integration/21808-1669348/kubeconfig
	E1217 11:47:40.941796 1860263 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/insufficient-storage-006783/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-006783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-006783
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-006783: (1.916115611s)
--- PASS: TestInsufficientStorage (8.93s)
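
The storage check is driven by the MINIKUBE_TEST_* variables visible in the JSON events above, which make minikube treat /var as effectively full; start then aborts with exit code 26 (RSRC_DOCKER_STORAGE) and status reports code 507, and per the emitted advice `--force` would skip the check. A sketch of the same sequence:

  export MINIKUBE_TEST_STORAGE_CAPACITY=100
  export MINIKUBE_TEST_AVAILABLE_STORAGE=19
  minikube start -p insufficient-storage-006783 --memory=3072 --output=json --wait=true \
      --driver=docker --container-runtime=crio      # exit 26, RSRC_DOCKER_STORAGE
  minikube status -p insufficient-storage-006783 --output=json --layout=cluster
      # StatusCode 507 (InsufficientStorage), exit status 7
  minikube delete -p insufficient-storage-006783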

                                                
                                    
TestRunningBinaryUpgrade (54.21s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3336902329 start -p running-upgrade-003429 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3336902329 start -p running-upgrade-003429 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.970032513s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-003429 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-003429 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.354630108s)
helpers_test.go:176: Cleaning up "running-upgrade-003429" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-003429
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-003429: (2.433679559s)
--- PASS: TestRunningBinaryUpgrade (54.21s)
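
What this verifies: a cluster created by an older minikube release can be taken over in place by the newer binary, simply by running `start` on the same profile while it is still running. Sketch (the old binary here is the v1.35.0 release the test stages under /tmp; it still uses the legacy --vm-driver spelling):

  # old release binary creates the cluster
  /tmp/minikube-v1.35.0.3336902329 start -p running-upgrade-003429 --memory=3072 \
      --vm-driver=docker --container-runtime=crio
  # newer binary re-runs start on the same, still-running profile
  out/minikube-linux-amd64 start -p running-upgrade-003429 --memory=3072 \
      --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 delete -p running-upgrade-003429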

                                                
                                    
TestKubernetesUpgrade (319.05s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.238496948s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-556754
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-556754: (14.05406195s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-556754 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-556754 status --format={{.Host}}: exit status 7 (99.878195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.849963003s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-556754 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.151623ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-556754] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-556754
	    minikube start -p kubernetes-upgrade-556754 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5567542 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-556754 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.912897172s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-556754" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-556754
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-556754: (2.728614135s)
--- PASS: TestKubernetesUpgrade (319.05s)
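
The upgrade path above is: create on an old Kubernetes version, stop, then start again with a newer --kubernetes-version; going back down is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested way out is delete-and-recreate. Sketch with this run's versions:

  minikube start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
  minikube stop -p kubernetes-upgrade-556754
  minikube start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.35.0-rc.1 \
      --driver=docker --container-runtime=crio
  kubectl --context kubernetes-upgrade-556754 version --output=json
  # downgrading in place is rejected:
  minikube start -p kubernetes-upgrade-556754 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio      # exit 106, K8S_DOWNGRADE_UNSUPPORTED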

                                                
                                    
TestMissingContainerUpgrade (92.2s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2372598576 start -p missing-upgrade-837067 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2372598576 start -p missing-upgrade-837067 --memory=3072 --driver=docker  --container-runtime=crio: (45.036550392s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-837067
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-837067: (1.992373409s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-837067
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-837067 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-837067 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.170813095s)
helpers_test.go:176: Cleaning up "missing-upgrade-837067" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-837067
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-837067: (2.41599324s)
--- PASS: TestMissingContainerUpgrade (92.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-057260 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-057260 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (106.257747ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-057260] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
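
As the error above states, --no-kubernetes and --kubernetes-version are mutually exclusive (exit 14, MK_USAGE); either drop the version flag or clear a globally configured one first. Sketch:

  # rejected: exit 14, MK_USAGE
  minikube start -p NoKubernetes-057260 --no-kubernetes --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
  # clear a global default if one was set, then start without Kubernetes
  minikube config unset kubernetes-version
  minikube start -p NoKubernetes-057260 --no-kubernetes --memory=3072 \
      --driver=docker --container-runtime=crio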

                                                
                                    
TestPause/serial/Start (54.49s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-016656 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-016656 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.486218363s)
--- PASS: TestPause/serial/Start (54.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-057260 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1217 11:48:03.172239 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-057260 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.084004134s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-057260 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (21.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.80715685s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-057260 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-057260 status -o json: exit status 2 (386.21726ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-057260","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-057260
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-057260: (2.393140297s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.59s)
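
Note: the exit status 2 from "status -o json" is expected here: after restarting the profile with --no-kubernetes, the container keeps running while the Kubernetes components stay stopped, which status reports with a non-zero exit. A small sketch of reading the same fields by hand, assuming jq is available (jq is not part of the test harness):

    minikube -p NoKubernetes-057260 status -o json | jq -r '.Host, .Kubelet, .APIServer'
    # Running
    # Stopped
    # Stopped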

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (20.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-016656 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-016656 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.813560887s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (20.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (13.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-057260 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.594491157s)
--- PASS: TestNoKubernetes/serial/Start (13.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
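
Note: this check asserts that a --no-kubernetes profile does not pull Kubernetes binaries into the local cache. A rough equivalent by hand, assuming the same MINIKUBE_HOME layout as in this run (the exact assertion in no_kubernetes_test.go may differ):

    ls /home/jenkins/minikube-integration/21808-1669348/.minikube/cache/linux/amd64/v0.0.0 2>/dev/null
    # an empty or missing directory means no kubelet/kubeadm/kubectl were downloaded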

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-057260 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-057260 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.589903ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
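
Note: the exit status 1 above is the pass condition: "systemctl is-active" returns 0 only when the unit is active, and the remote status 3 reported over ssh commonly indicates an inactive unit. A sketch of the same probe without --quiet, so the state is printed instead of inferred from the exit code (assuming the profile is still running):

    minikube ssh -p NoKubernetes-057260 "systemctl is-active kubelet"
    # expected output: inactive (or unknown, if the unit was never installed)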

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-057260
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-057260: (1.306008387s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-057260 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-057260 --driver=docker  --container-runtime=crio: (8.094383769s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-057260 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-057260 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.014374ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (288.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3840777825 start -p stopped-upgrade-287611 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3840777825 start -p stopped-upgrade-287611 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.880840076s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3840777825 -p stopped-upgrade-287611 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3840777825 -p stopped-upgrade-287611 stop: (2.590126852s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-287611 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-287611 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.41862333s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (288.89s)
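
Note: this upgrade path is three steps driven by two different binaries: create the cluster with an older release, stop it, then start the same profile with the binary under test. A condensed sketch of the flow, assuming a downloaded v1.35.0 release binary in place of the temp file used by the harness:

    ./minikube-v1.35.0 start -p stopped-upgrade-287611 --memory=3072 --vm-driver=docker --container-runtime=crio
    ./minikube-v1.35.0 -p stopped-upgrade-287611 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-287611 --memory=3072 --driver=docker --container-runtime=crio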

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-213935 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-213935 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (178.822805ms)

                                                
                                                
-- stdout --
	* [false-213935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:50:05.647229 1906662 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:50:05.647335 1906662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:05.647346 1906662 out.go:374] Setting ErrFile to fd 2...
	I1217 11:50:05.647352 1906662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:05.647616 1906662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1669348/.minikube/bin
	I1217 11:50:05.648089 1906662 out.go:368] Setting JSON to false
	I1217 11:50:05.649309 1906662 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19951,"bootTime":1765952255,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:50:05.649378 1906662 start.go:143] virtualization: kvm guest
	I1217 11:50:05.654373 1906662 out.go:179] * [false-213935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:50:05.656307 1906662 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:50:05.656389 1906662 notify.go:221] Checking for updates...
	I1217 11:50:05.659214 1906662 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:50:05.660507 1906662 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1669348/kubeconfig
	I1217 11:50:05.661587 1906662 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1669348/.minikube
	I1217 11:50:05.662930 1906662 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:50:05.664152 1906662 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:50:05.666073 1906662 config.go:182] Loaded profile config "cert-expiration-067996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:50:05.666193 1906662 config.go:182] Loaded profile config "kubernetes-upgrade-556754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:05.666293 1906662 config.go:182] Loaded profile config "stopped-upgrade-287611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 11:50:05.666407 1906662 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:50:05.692463 1906662 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 11:50:05.692597 1906662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:05.752726 1906662 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 11:50:05.742552665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 11:50:05.752846 1906662 docker.go:319] overlay module found
	I1217 11:50:05.754605 1906662 out.go:179] * Using the docker driver based on user configuration
	I1217 11:50:05.755904 1906662 start.go:309] selected driver: docker
	I1217 11:50:05.755918 1906662 start.go:927] validating driver "docker" against <nil>
	I1217 11:50:05.755944 1906662 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:50:05.757491 1906662 out.go:203] 
	W1217 11:50:05.758784 1906662 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 11:50:05.759890 1906662 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-213935 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-213935" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:49:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-556754
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:49:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-287611
contexts:
- context:
    cluster: kubernetes-upgrade-556754
    user: kubernetes-upgrade-556754
  name: kubernetes-upgrade-556754
- context:
    cluster: stopped-upgrade-287611
    user: stopped-upgrade-287611
  name: stopped-upgrade-287611
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-556754
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key
- name: stopped-upgrade-287611
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/stopped-upgrade-287611/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/stopped-upgrade-287611/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-213935

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-213935"

                                                
                                                
----------------------- debugLogs end: false-213935 [took: 3.429812939s] --------------------------------
helpers_test.go:176: Cleaning up "false-213935" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-213935
--- PASS: TestNetworkPlugins/group/false (3.79s)
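
Note: this "false" CNI group is expected to be rejected up front: with the crio runtime, minikube refuses --cni=false (MK_USAGE, exit status 14), so the debug log above only probes a profile that was never created. A sketch of the failing flag combination next to ones minikube accepts for crio (the bridge choice is just an illustrative value, not what the other network-plugin groups in this report use):

    # rejected: crio needs a CNI plugin
    minikube start -p false-213935 --cni=false --driver=docker --container-runtime=crio

    # accepted: let minikube pick a CNI automatically, or name one explicitly
    minikube start -p false-213935 --driver=docker --container-runtime=crio
    minikube start -p false-213935 --cni=bridge --driver=docker --container-runtime=crio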

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (48.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 11:51:53.252644 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.533443425s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-401285 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6087a098-1923-4b52-82a5-cfa6127e5a10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6087a098-1923-4b52-82a5-cfa6127e5a10] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003777161s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-401285 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)
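
Note: the DeployApp step is the same in every group: apply the busybox manifest from testdata, wait until the pod labelled integration-test=busybox is Running, then exec into it. A hand-run sketch of the same sequence, assuming the old-k8s-version-401285 context and the repository's testdata/busybox.yaml (its contents are not reproduced here):

    kubectl --context old-k8s-version-401285 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-401285 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-401285 exec busybox -- /bin/sh -c "ulimit -n"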

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-401285 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-401285 --alsologtostderr -v=3: (16.097484905s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285: exit status 7 (84.495556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-401285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
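
Note: "status --format={{.Host}}" renders a Go template over the same structure that "status -o json" prints, so a stopped cluster yields the single word "Stopped" together with a non-zero exit code, which the test treats as acceptable ("may be ok"). A sketch of probing individual fields this way, assuming the profile is still in the stopped state:

    minikube status -p old-k8s-version-401285 --format='{{.Host}}'      # Stopped
    minikube status -p old-k8s-version-401285 --format='{{.Kubelet}}'   # Stopped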

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (50.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 11:53:03.171702 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/functional-212713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-401285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.151452021s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-401285 -n old-k8s-version-401285
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-klmw2" [ad3e2463-5388-453d-8fe6-25428420edfd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003980249s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-klmw2" [ad3e2463-5388-453d-8fe6-25428420edfd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004403764s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-401285 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (49.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (49.596594935s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-401285 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (48.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (48.957323533s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (42.951262552s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.95s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-287611
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-287611: (2.652568198s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (25.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (25.699377587s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-737478 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5812a4e7-2a1f-4e57-a29f-bf4c78d30ffd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5812a4e7-2a1f-4e57-a29f-bf4c78d30ffd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004577415s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-737478 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-737478 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-737478 --alsologtostderr -v=3: (18.268016583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-542273 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [93dc2ecf-3c1a-4f60-bd0e-6f961d537d2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [93dc2ecf-3c1a-4f60-bd0e-6f961d537d2c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003364501s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-542273 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-601829 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-601829 --alsologtostderr -v=3: (2.647130243s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829: exit status 7 (88.734008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-601829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-601829 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (10.289278544s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-601829 -n newest-cni-601829
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.63s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-542273 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-542273 --alsologtostderr -v=3: (16.666112194s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [da1d1f67-6ece-4cec-89b1-7a562b3d92a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [da1d1f67-6ece-4cec-89b1-7a562b3d92a5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.008459315s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478: exit status 7 (130.268522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-737478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-737478 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (51.666125684s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737478 -n no-preload-737478
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-601829 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
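
VerifyKubernetesImages dumps the profile's images as JSON and reports anything outside the expected minikube/Kubernetes image set (here the two kindest/kindnetd images). A rough manual spot check, reusing the logged command with a grep as an assumed, simplified filter rather than the test's real JSON parsing:

    # List images as JSON (same command the test runs) and pick out the kindnet tags.
    out/minikube-linux-amd64 -p newest-cni-601829 image list --format=json \
      | grep -o 'kindest/kindnetd[^"]*'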

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-382022 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-382022 --alsologtostderr -v=3: (16.980679797s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273: exit status 7 (104.511638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-542273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (45.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-542273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (45.641026689s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-542273 -n embed-certs-542273
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1217 11:54:56.329060 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.434233035s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022: exit status 7 (123.575535ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-382022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-382022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (51.162770875s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382022 -n default-k8s-diff-port-382022
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.57s)
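
Unlike the other StartStop groups, default-k8s-diff-port restarts the cluster with the API server on port 8444 instead of minikube's default 8443 (the --apiserver-port=8444 flag above). A quick way to confirm which endpoint the kubeconfig now points at, using standard kubectl rather than anything the test itself runs:

    # The control plane URL printed here should end in :8444 for this profile.
    kubectl --context default-k8s-diff-port-382022 cluster-info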

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-t9pxx" [00450ac9-1978-434f-8ea4-d98b45387d8b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003335522s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-213935 "pgrep -a kubelet"
I1217 11:55:36.789190 1672941 config.go:182] Loaded profile config "auto-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-213935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-445w9" [90a86435-827d-4098-8998-53f6fe921208] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-445w9" [90a86435-827d-4098-8998-53f6fe921208] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003878903s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)
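
NetCatPod force-replaces a small netcat deployment from testdata and waits up to 15m for its pod (label app=netcat) to become Ready before the connectivity probes run. Roughly the same check by hand; the `kubectl wait` call below is an illustrative stand-in, since the harness polls through its own helpers:

    kubectl --context auto-213935 replace --force -f testdata/netcat-deployment.yaml
    # Wait for the pod carrying the app=netcat label to report Ready.
    kubectl --context auto-213935 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    kubectl --context auto-213935 get pods -l app=netcat -o wide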

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4l444" [2c145daa-4b13-4d9d-9c48-dac61c781395] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004040185s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-t9pxx" [00450ac9-1978-434f-8ea4-d98b45387d8b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004268784s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-737478 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
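
UserAppExistsAfterStop and AddonExistsAfterStop both watch the kubernetes-dashboard namespace for a Running pod with the k8s-app=kubernetes-dashboard label, then describe the dashboard-metrics-scraper deployment. A hand-run equivalent (sketch only; the wait call is not literally what the test executes):

    kubectl --context no-preload-737478 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-737478 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context no-preload-737478 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper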

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4l444" [2c145daa-4b13-4d9d-9c48-dac61c781395] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003568797s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-542273 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
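
Each CNI group ends with the same three probes run from inside the netcat pod: resolve the in-cluster name kubernetes.default (DNS), connect to localhost:8080 (Localhost), and connect back to the pod through its own `netcat` service (HairPin). They can be repeated verbatim from the log:

    # In-cluster DNS resolution.
    kubectl --context auto-213935 exec deployment/netcat -- nslookup kubernetes.default
    # Loopback connectivity from inside the pod.
    kubectl --context auto-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: reach the pod back through its own service name.
    kubectl --context auto-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"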

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-737478 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-542273 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (45.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.218359717s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-68hlv" [bb7da256-1b37-4ce4-9985-dd068a6f4b9f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004219081s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (58.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (58.190463571s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-68hlv" [bb7da256-1b37-4ce4-9985-dd068a6f4b9f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004750817s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-382022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.875648666s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.88s)
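
The custom-flannel group is the only one that passes a CNI manifest file instead of a named plugin (--cni=testdata/kube-flannel.yaml). A trimmed version of that start, followed by a check that the flannel DaemonSet pods came up; the kube-flannel namespace and app=flannel label are taken from the flannel ControllerPod step later in this report and assumed to match what the custom manifest creates:

    out/minikube-linux-amd64 start -p custom-flannel-213935 --memory=3072 --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
    kubectl --context custom-flannel-213935 -n kube-flannel get pods -l app=flannel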

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382022 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m8.78526259s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-bnd4n" [d47a9eb0-62e2-46b2-81f7-eae15bcb6279] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005132077s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
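
ControllerPod only confirms that the CNI's own pod is Running before the connectivity probes start; kindnet is matched on app=kindnet in kube-system, while calico uses k8s-app=calico-node and flannel uses app=flannel in the kube-flannel namespace (all visible in the blocks below). The kindnet check by hand; the wait call is an assumed stand-in for the harness's own polling:

    kubectl --context kindnet-213935 -n kube-system get pods -l app=kindnet -o wide
    kubectl --context kindnet-213935 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m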

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-213935 "pgrep -a kubelet"
I1217 11:56:47.349194 1672941 config.go:182] Loaded profile config "kindnet-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-213935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-m42cp" [f259c148-eeaa-475a-9d9d-80147f509855] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-m42cp" [f259c148-eeaa-475a-9d9d-80147f509855] Running
E1217 11:56:53.252465 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/addons-767877/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.0040316s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-mjvwz" [05c7808f-c996-4c54-9fe5-365fc1137c40] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-mjvwz" [05c7808f-c996-4c54-9fe5-365fc1137c40] Running
E1217 11:57:02.008868 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004022126s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-213935 "pgrep -a kubelet"
I1217 11:57:01.492954 1672941 config.go:182] Loaded profile config "custom-flannel-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-213935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mvpjk" [6c65f8cf-9681-44a7-ac92-f39260096128] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mvpjk" [6c65f8cf-9681-44a7-ac92-f39260096128] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003648962s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-213935 "pgrep -a kubelet"
I1217 11:57:07.079620 1672941 config.go:182] Loaded profile config "calico-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-213935 replace --force -f testdata/netcat-deployment.yaml
E1217 11:57:07.131178 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-24x6q" [ae4623b8-fb5b-4c6a-b577-17a88560af39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-24x6q" [ae4623b8-fb5b-4c6a-b577-17a88560af39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004631334s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (51.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.132392663s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-213935 "pgrep -a kubelet"
I1217 11:57:28.765447 1672941 config.go:182] Loaded profile config "enable-default-cni-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-213935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t7s6v" [dc290b68-c9e0-48f9-9332-d0085fc5199b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-t7s6v" [dc290b68-c9e0-48f9-9332-d0085fc5199b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008283608s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (65.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-213935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m5.421562563s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-g7nrw" [dfbd8798-04db-4ef6-9870-8cd6231ca87e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003746668s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-213935 "pgrep -a kubelet"
I1217 11:58:14.014512 1672941 config.go:182] Loaded profile config "flannel-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-213935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dq46t" [40b7b0f0-93ac-49dd-bb6d-d4f38477a3b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-dq46t" [40b7b0f0-93ac-49dd-bb6d-d4f38477a3b8] Running
E1217 11:58:18.817007 1672941 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/old-k8s-version-401285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003912672s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-213935 "pgrep -a kubelet"
I1217 11:58:37.944288 1672941 config.go:182] Loaded profile config "bridge-213935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-213935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mhlh2" [9f4d9335-e26f-4be5-8f2d-e1a31bc0091c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mhlh2" [9f4d9335-e26f-4be5-8f2d-e1a31bc0091c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004239534s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-213935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-213935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    

Test skip (34/415)

Order Skipped test Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
383 TestStartStop/group/disable-driver-mounts 0.2
387 TestNetworkPlugins/group/kubenet 3.7
395 TestNetworkPlugins/group/cilium 4.4
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
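
The two skips above reflect the runtime/driver matrix, not a failure: this job tests crio on the docker driver, while TestDockerFlags needs the docker runtime and TestDockerEnvContainerd needs containerd on the docker driver. A minimal sketch of starting profiles with the combinations those tests expect, assuming a local minikube binary; the profile names are placeholders:

# docker runtime on the docker driver (what TestDockerFlags expects)
minikube start -p docker-flags-demo --driver=docker --container-runtime=docker
# containerd runtime on the docker driver (what TestDockerEnvContainerd expects)
minikube start -p containerd-env-demo --driver=docker --container-runtime=containerd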

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
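
The three tunnel skips above all come from the same guard: DNS resolution through minikube tunnel is only wired up for the Hyperkit driver on macOS. The tunnel itself still works on Linux, so a rough sketch of exercising it by LoadBalancer IP instead of DNS would be (profile and service names are placeholders):

# Keep a tunnel open for the profile so LoadBalancer services get an external IP.
minikube tunnel -p functional-000000 &
# Read the IP the tunnel assigned and hit the service directly.
kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'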

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
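
As the skip reason says, skaffold's workflow here depends on minikube docker-env, which points the host docker client at the docker daemon inside the cluster; with crio there is no docker daemon to target. A short sketch of that handshake against a docker-runtime profile (profile name and image tag are placeholders):

# Point the local docker CLI at the cluster's docker daemon.
eval "$(minikube -p docker-runtime-demo docker-env)"
# Images built now are immediately visible to the cluster, which is what skaffold relies on.
docker build -t demo/app:dev .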

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-618082" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-618082
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-213935 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-213935" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:49:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-556754
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:49:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-287611
contexts:
- context:
    cluster: kubernetes-upgrade-556754
    user: kubernetes-upgrade-556754
  name: kubernetes-upgrade-556754
- context:
    cluster: stopped-upgrade-287611
    user: stopped-upgrade-287611
  name: stopped-upgrade-287611
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-556754
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key
- name: stopped-upgrade-287611
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/stopped-upgrade-287611/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/stopped-upgrade-287611/client.key
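
The config above is why every kubectl probe in this block fails: current-context is empty and there is no kubenet-213935 entry at all, only the two upgrade-test profiles that happened to be on the host at the time. Once a profile has actually been started, minikube writes and selects a context under the profile's name; the manual equivalent would be:

# List the contexts minikube has written, then select one explicitly.
kubectl config get-contexts
kubectl config use-context kubernetes-upgrade-556754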

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-213935

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-213935"

                                                
                                                
----------------------- debugLogs end: kubenet-213935 [took: 3.498564671s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-213935" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-213935
--- SKIP: TestNetworkPlugins/group/kubenet (3.70s)
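
The debugLogs pass runs during cleanup even for a skipped test, so against a profile that was never created it can only record "context was not found" and "Profile not found" answers. For comparison, the same host-side checks against a profile that does exist would look roughly like this (the profile name is a placeholder):

# CRI and CNI state gathered over minikube ssh.
minikube -p running-profile ssh -- sudo crictl pods
minikube -p running-profile ssh -- sudo crictl ps -a
minikube -p running-profile ssh -- ls /etc/cni/net.d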

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-213935 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-213935" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:49:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-556754
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1669348/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:49:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-287611
contexts:
- context:
    cluster: kubernetes-upgrade-556754
    user: kubernetes-upgrade-556754
  name: kubernetes-upgrade-556754
- context:
    cluster: stopped-upgrade-287611
    user: stopped-upgrade-287611
  name: stopped-upgrade-287611
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-556754
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/kubernetes-upgrade-556754/client.key
- name: stopped-upgrade-287611
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/stopped-upgrade-287611/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1669348/.minikube/profiles/stopped-upgrade-287611/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-213935

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-213935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-213935"

                                                
                                                
----------------------- debugLogs end: cilium-213935 [took: 4.208912783s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-213935" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-213935
--- SKIP: TestNetworkPlugins/group/cilium (4.40s)

                                                
                                    